Test Report: Docker_Linux_containerd 13812

afb3956fdbde357e4baa0f8617bfd5a64bad6558:2022-03-25:23180

Test fail (17/267)

TestStartStop/group/old-k8s-version/serial/FirstStart (501.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220325015306-262786 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-20220325015306-262786 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: exit status 80 (8m19.462655006s)

-- stdout --
	* [old-k8s-version-20220325015306-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node old-k8s-version-20220325015306-262786 in cluster old-k8s-version-20220325015306-262786
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-20220325015306-262786" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on containerd 1.5.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0325 01:53:06.744250  431164 out.go:297] Setting OutFile to fd 1 ...
	I0325 01:53:06.744362  431164 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:53:06.744372  431164 out.go:310] Setting ErrFile to fd 2...
	I0325 01:53:06.744376  431164 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:53:06.744486  431164 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 01:53:06.744811  431164 out.go:304] Setting JSON to false
	I0325 01:53:06.746140  431164 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":16259,"bootTime":1648156928,"procs":594,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 01:53:06.746212  431164 start.go:125] virtualization: kvm guest
	I0325 01:53:06.886302  431164 out.go:176] * [old-k8s-version-20220325015306-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 01:53:06.886522  431164 notify.go:193] Checking for updates...
	I0325 01:53:07.082946  431164 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 01:53:07.097889  431164 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 01:53:07.100205  431164 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 01:53:07.101930  431164 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 01:53:07.103536  431164 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 01:53:07.104276  431164 config.go:176] Loaded profile config "auto-20220325014919-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:53:07.104454  431164 config.go:176] Loaded profile config "kubernetes-upgrade-20220325015003-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 01:53:07.104578  431164 config.go:176] Loaded profile config "running-upgrade-20220325014921-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0325 01:53:07.104639  431164 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 01:53:07.153836  431164 docker.go:136] docker version: linux-20.10.14
	I0325 01:53:07.153956  431164 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 01:53:07.263132  431164 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:69 SystemTime:2022-03-25 01:53:07.188979319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 01:53:07.263257  431164 docker.go:253] overlay module found
	I0325 01:53:07.267007  431164 out.go:176] * Using the docker driver based on user configuration
	I0325 01:53:07.267047  431164 start.go:284] selected driver: docker
	I0325 01:53:07.267053  431164 start.go:801] validating driver "docker" against <nil>
	I0325 01:53:07.267074  431164 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 01:53:07.267123  431164 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 01:53:07.267144  431164 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0325 01:53:07.268782  431164 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 01:53:07.269411  431164 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 01:53:07.379145  431164 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:69 SystemTime:2022-03-25 01:53:07.305618135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 01:53:07.379310  431164 start_flags.go:290] no existing cluster config was found, will generate one from the flags 
	I0325 01:53:07.379511  431164 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 01:53:07.379535  431164 cni.go:93] Creating CNI manager for ""
	I0325 01:53:07.379542  431164 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 01:53:07.379550  431164 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0325 01:53:07.379559  431164 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0325 01:53:07.379565  431164 start_flags.go:299] Found "CNI" CNI - setting NetworkPlugin=cni
	I0325 01:53:07.379578  431164 start_flags.go:304] config:
	{Name:old-k8s-version-20220325015306-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 01:53:07.403164  431164 out.go:176] * Starting control plane node old-k8s-version-20220325015306-262786 in cluster old-k8s-version-20220325015306-262786
	I0325 01:53:07.403218  431164 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 01:53:07.405626  431164 out.go:176] * Pulling base image ...
	I0325 01:53:07.405667  431164 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0325 01:53:07.405710  431164 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0325 01:53:07.405726  431164 cache.go:57] Caching tarball of preloaded images
	I0325 01:53:07.405760  431164 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 01:53:07.405992  431164 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 01:53:07.406016  431164 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0325 01:53:07.406151  431164 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/config.json ...
	I0325 01:53:07.406184  431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/config.json: {Name:mk5e2f006e0e19c174c7a53c7f043140e531ad14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:53:07.454855  431164 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 01:53:07.454890  431164 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 01:53:07.454919  431164 cache.go:208] Successfully downloaded all kic artifacts
	I0325 01:53:07.454984  431164 start.go:348] acquiring machines lock for old-k8s-version-20220325015306-262786: {Name:mk6f712225030023aec99b26d6c356d6d62f23e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 01:53:07.455134  431164 start.go:352] acquired machines lock for "old-k8s-version-20220325015306-262786" in 113.509µs
	I0325 01:53:07.455167  431164 start.go:90] Provisioning new machine with config: &{Name:old-k8s-version-20220325015306-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 01:53:07.455280  431164 start.go:127] createHost starting for "" (driver="docker")
	I0325 01:53:07.457995  431164 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0325 01:53:07.458326  431164 start.go:161] libmachine.API.Create for "old-k8s-version-20220325015306-262786" (driver="docker")
	I0325 01:53:07.458370  431164 client.go:168] LocalClient.Create starting
	I0325 01:53:07.458463  431164 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0325 01:53:07.458523  431164 main.go:130] libmachine: Decoding PEM data...
	I0325 01:53:07.458550  431164 main.go:130] libmachine: Parsing certificate...
	I0325 01:53:07.458632  431164 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0325 01:53:07.458659  431164 main.go:130] libmachine: Decoding PEM data...
	I0325 01:53:07.458681  431164 main.go:130] libmachine: Parsing certificate...
	I0325 01:53:07.459176  431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0325 01:53:07.499630  431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0325 01:53:07.499703  431164 network_create.go:254] running [docker network inspect old-k8s-version-20220325015306-262786] to gather additional debugging logs...
	I0325 01:53:07.499732  431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786
	W0325 01:53:07.540491  431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 returned with exit code 1
	I0325 01:53:07.540532  431164 network_create.go:257] error running [docker network inspect old-k8s-version-20220325015306-262786]: docker network inspect old-k8s-version-20220325015306-262786: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220325015306-262786
	I0325 01:53:07.540563  431164 network_create.go:259] output of [docker network inspect old-k8s-version-20220325015306-262786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220325015306-262786
	
	** /stderr **
	I0325 01:53:07.540653  431164 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 01:53:07.596601  431164 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-23ae52b3b8f2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:b7:bb:c1:32}}
	I0325 01:53:07.597575  431164 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-16647239848e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:47:fb:23:78}}
	I0325 01:53:07.598613  431164 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc00013e8e8] misses:0}
	I0325 01:53:07.598656  431164 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0325 01:53:07.598673  431164 network_create.go:106] attempt to create docker network old-k8s-version-20220325015306-262786 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0325 01:53:07.598722  431164 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220325015306-262786
	I0325 01:53:07.736169  431164 network_create.go:90] docker network old-k8s-version-20220325015306-262786 192.168.67.0/24 created
	I0325 01:53:07.736216  431164 kic.go:106] calculated static IP "192.168.67.2" for the "old-k8s-version-20220325015306-262786" container
	I0325 01:53:07.736267  431164 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0325 01:53:07.777633  431164 cli_runner.go:133] Run: docker volume create old-k8s-version-20220325015306-262786 --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true
	I0325 01:53:07.813476  431164 oci.go:102] Successfully created a docker volume old-k8s-version-20220325015306-262786
	I0325 01:53:07.813560  431164 cli_runner.go:133] Run: docker run --rm --name old-k8s-version-20220325015306-262786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --entrypoint /usr/bin/test -v old-k8s-version-20220325015306-262786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0325 01:53:09.710810  431164 cli_runner.go:186] Completed: docker run --rm --name old-k8s-version-20220325015306-262786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --entrypoint /usr/bin/test -v old-k8s-version-20220325015306-262786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib: (1.897172429s)
	I0325 01:53:09.710849  431164 oci.go:106] Successfully prepared a docker volume old-k8s-version-20220325015306-262786
	I0325 01:53:09.710897  431164 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0325 01:53:09.710924  431164 kic.go:179] Starting extracting preloaded images to volume ...
	I0325 01:53:09.711017  431164 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220325015306-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0325 01:53:18.109802  431164 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220325015306-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (8.398715615s)
	I0325 01:53:18.109852  431164 kic.go:188] duration metric: took 8.398924 seconds to extract preloaded images to volume
	W0325 01:53:18.109888  431164 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0325 01:53:18.109898  431164 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0325 01:53:18.109956  431164 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0325 01:53:18.228621  431164 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220325015306-262786 --name old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --network old-k8s-version-20220325015306-262786 --ip 192.168.67.2 --volume old-k8s-version-20220325015306-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	W0325 01:53:18.309373  431164 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220325015306-262786 --name old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --network old-k8s-version-20220325015306-262786 --ip 192.168.67.2 --volume old-k8s-version-20220325015306-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 returned with exit code 125
	I0325 01:53:18.309445  431164 client.go:171] LocalClient.Create took 10.851065706s
	I0325 01:53:20.309747  431164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 01:53:20.309841  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	W0325 01:53:20.349287  431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
	I0325 01:53:20.349415  431164 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0325 01:53:20.625849  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	W0325 01:53:20.665234  431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
	I0325 01:53:20.665363  431164 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0325 01:53:21.206079  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	W0325 01:53:21.246889  431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
	I0325 01:53:21.247027  431164 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0325 01:53:21.902781  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	W0325 01:53:21.939759  431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
	W0325 01:53:21.939865  431164 start.go:277] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0325 01:53:21.939880  431164 start.go:244] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0325 01:53:21.939912  431164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 01:53:21.939940  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	W0325 01:53:21.972264  431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
	I0325 01:53:21.972406  431164 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0325 01:53:22.203802  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	W0325 01:53:22.236559  431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
	I0325 01:53:22.236676  431164 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0325 01:53:22.681982  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	W0325 01:53:22.711090  431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
	I0325 01:53:22.711189  431164 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0325 01:53:23.029697  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	W0325 01:53:23.061316  431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
	I0325 01:53:23.061407  431164 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0325 01:53:23.616238  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	W0325 01:53:23.646212  431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
	W0325 01:53:23.646343  431164 start.go:292] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0325 01:53:23.646363  431164 start.go:249] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0325 01:53:23.646392  431164 start.go:130] duration metric: createHost completed in 16.191098684s
	I0325 01:53:23.646401  431164 start.go:81] releasing machines lock for "old-k8s-version-20220325015306-262786", held for 16.191256374s
	W0325 01:53:23.646435  431164 start.go:570] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220325015306-262786 --name old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --network old-k8s-version-20220325015306-262786 --ip 192.168.67.2 --volume old-k8s-version-20220325015306-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
	stdout:
	70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218
	
	stderr:
	docker: Error response from daemon: network old-k8s-version-20220325015306-262786 not found.
	I0325 01:53:23.646876  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	W0325 01:53:23.674964  431164 start.go:575] delete host: Docker machine "old-k8s-version-20220325015306-262786" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0325 01:53:23.675199  431164 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220325015306-262786 --name old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --network old-k8s-version-20220325015306-262786 --ip 192.168.67.2 --volume old-k8s-version-20220325015306-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
	stdout:
	70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218
	
	stderr:
	docker: Error response from daemon: network old-k8s-version-20220325015306-262786 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220325015306-262786 --name old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --network old-k8s-version-20220325015306-262786 --ip 192.168.67.2 --volume old-k8s-version-20220325015306-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
	stdout:
	70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218
	
	stderr:
	docker: Error response from daemon: network old-k8s-version-20220325015306-262786 not found.
	
	I0325 01:53:23.675224  431164 start.go:585] Will try again in 5 seconds ...
	I0325 01:53:28.676153  431164 start.go:348] acquiring machines lock for old-k8s-version-20220325015306-262786: {Name:mk6f712225030023aec99b26d6c356d6d62f23e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 01:53:28.676313  431164 start.go:352] acquired machines lock for "old-k8s-version-20220325015306-262786" in 115.05µs
	I0325 01:53:28.676353  431164 start.go:94] Skipping create...Using existing machine configuration
	I0325 01:53:28.676363  431164 fix.go:55] fixHost starting: 
	I0325 01:53:28.676888  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:53:28.723054  431164 fix.go:108] recreateIfNeeded on old-k8s-version-20220325015306-262786: state= err=<nil>
	I0325 01:53:28.723096  431164 fix.go:113] machineExists: false. err=machine does not exist
	I0325 01:53:28.724752  431164 out.go:176] * docker "old-k8s-version-20220325015306-262786" container is missing, will recreate.
	I0325 01:53:28.724781  431164 delete.go:124] DEMOLISHING old-k8s-version-20220325015306-262786 ...
	I0325 01:53:28.724842  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:53:28.756595  431164 stop.go:79] host is in state 
	I0325 01:53:28.756631  431164 main.go:130] libmachine: Stopping "old-k8s-version-20220325015306-262786"...
	I0325 01:53:28.756698  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:53:28.789590  431164 kic_runner.go:93] Run: systemctl --version
	I0325 01:53:28.789616  431164 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220325015306-262786 systemctl --version]
	I0325 01:53:28.830431  431164 kic_runner.go:93] Run: sudo service kubelet stop
	I0325 01:53:28.830456  431164 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220325015306-262786 sudo service kubelet stop]
	I0325 01:53:28.875260  431164 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218 is not running
	
	** /stderr **
	W0325 01:53:28.875281  431164 kic.go:443] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218 is not running
	I0325 01:53:28.875341  431164 kic_runner.go:93] Run: sudo service kubelet stop
	I0325 01:53:28.875353  431164 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220325015306-262786 sudo service kubelet stop]
	I0325 01:53:28.939087  431164 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218 is not running
	
	** /stderr **
	W0325 01:53:28.939115  431164 kic.go:445] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218 is not running
	I0325 01:53:28.939136  431164 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0325 01:53:28.939214  431164 kic_runner.go:93] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0325 01:53:28.939238  431164 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220325015306-262786 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator]
	I0325 01:53:28.981135  431164 kic.go:456] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218 is not running
	I0325 01:53:28.981166  431164 kic.go:466] successfully stopped kubernetes!
	I0325 01:53:28.981217  431164 kic_runner.go:93] Run: pgrep kube-apiserver
	I0325 01:53:28.981227  431164 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220325015306-262786 pgrep kube-apiserver]
	I0325 01:53:29.085088  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:53:32.136545  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:53:35.171131  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:53:38.206727  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:53:41.242333  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:53:44.276035  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:53:47.317900  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:53:50.363044  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:53:53.398845  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:53:56.467091  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:53:59.511115  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:02.556552  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:05.591089  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:08.645602  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:11.683108  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:14.736042  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:17.769080  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:20.804717  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:23.851088  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:26.885627  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:29.920168  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:32.955019  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:35.989701  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:39.039107  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:42.070890  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:45.104461  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:48.139081  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:51.171105  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:54.203995  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:57.236361  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:00.275444  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:03.334100  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:06.368209  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:09.407114  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:12.439881  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:15.478625  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:18.513302  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:21.545857  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:24.580164  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:27.612926  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:30.649664  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:33.683100  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:36.715325  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:39.751091  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:42.785301  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:45.821589  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:48.854113  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:51.885844  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:54.919097  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:57.951168  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:00.986964  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:04.022375  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:07.054544  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:10.088923  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:13.121694  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:16.158628  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:19.193066  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:22.229496  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:25.263135  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:28.299080  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:31.335036  431164 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0325 01:56:31.335082  431164 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
	I0325 01:56:31.335570  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	W0325 01:56:31.369049  431164 delete.go:135] deletehost failed: Docker machine "old-k8s-version-20220325015306-262786" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0325 01:56:31.369136  431164 cli_runner.go:133] Run: docker container inspect -f {{.Id}} old-k8s-version-20220325015306-262786
	I0325 01:56:31.404692  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:31.436643  431164 cli_runner.go:133] Run: docker exec --privileged -t old-k8s-version-20220325015306-262786 /bin/bash -c "sudo init 0"
	W0325 01:56:31.469236  431164 cli_runner.go:180] docker exec --privileged -t old-k8s-version-20220325015306-262786 /bin/bash -c "sudo init 0" returned with exit code 1
	I0325 01:56:31.469271  431164 oci.go:659] error shutdown old-k8s-version-20220325015306-262786: docker exec --privileged -t old-k8s-version-20220325015306-262786 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218 is not running
	I0325 01:56:32.470272  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:32.503561  431164 oci.go:673] temporary error: container old-k8s-version-20220325015306-262786 status is  but expect it to be exited
	I0325 01:56:32.503590  431164 oci.go:679] Successfully shutdown container old-k8s-version-20220325015306-262786
	I0325 01:56:32.503641  431164 cli_runner.go:133] Run: docker rm -f -v old-k8s-version-20220325015306-262786
	I0325 01:56:32.540810  431164 cli_runner.go:133] Run: docker container inspect -f {{.Id}} old-k8s-version-20220325015306-262786
	W0325 01:56:32.570903  431164 cli_runner.go:180] docker container inspect -f {{.Id}} old-k8s-version-20220325015306-262786 returned with exit code 1
	I0325 01:56:32.571005  431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0325 01:56:32.601633  431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0325 01:56:32.601695  431164 network_create.go:254] running [docker network inspect old-k8s-version-20220325015306-262786] to gather additional debugging logs...
	I0325 01:56:32.601719  431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786
	W0325 01:56:32.632633  431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 returned with exit code 1
	I0325 01:56:32.632663  431164 network_create.go:257] error running [docker network inspect old-k8s-version-20220325015306-262786]: docker network inspect old-k8s-version-20220325015306-262786: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220325015306-262786
	I0325 01:56:32.632678  431164 network_create.go:259] output of [docker network inspect old-k8s-version-20220325015306-262786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220325015306-262786
	
	** /stderr **
	W0325 01:56:32.632818  431164 delete.go:139] delete failed (probably ok) <nil>
	I0325 01:56:32.632831  431164 fix.go:120] Sleeping 1 second for extra luck!
	I0325 01:56:33.633777  431164 start.go:127] createHost starting for "" (driver="docker")
	I0325 01:56:33.636953  431164 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0325 01:56:33.637111  431164 start.go:161] libmachine.API.Create for "old-k8s-version-20220325015306-262786" (driver="docker")
	I0325 01:56:33.637158  431164 client.go:168] LocalClient.Create starting
	I0325 01:56:33.637270  431164 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0325 01:56:33.637315  431164 main.go:130] libmachine: Decoding PEM data...
	I0325 01:56:33.637341  431164 main.go:130] libmachine: Parsing certificate...
	I0325 01:56:33.637420  431164 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0325 01:56:33.637448  431164 main.go:130] libmachine: Decoding PEM data...
	I0325 01:56:33.637471  431164 main.go:130] libmachine: Parsing certificate...
	I0325 01:56:33.637805  431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0325 01:56:33.670584  431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0325 01:56:33.670681  431164 network_create.go:254] running [docker network inspect old-k8s-version-20220325015306-262786] to gather additional debugging logs...
	I0325 01:56:33.670699  431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786
	W0325 01:56:33.700818  431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 returned with exit code 1
	I0325 01:56:33.700851  431164 network_create.go:257] error running [docker network inspect old-k8s-version-20220325015306-262786]: docker network inspect old-k8s-version-20220325015306-262786: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220325015306-262786
	I0325 01:56:33.700871  431164 network_create.go:259] output of [docker network inspect old-k8s-version-20220325015306-262786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220325015306-262786
	
	** /stderr **
	I0325 01:56:33.700917  431164 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 01:56:33.731365  431164 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-fcb21d43dbbf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:db:45:ae:c5}}
	I0325 01:56:33.732243  431164 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-a040cc4bab62 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d0:f2:08:b6}}
	I0325 01:56:33.733015  431164 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-12bda0d2312e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:de:32:64:a8}}
	I0325 01:56:33.733812  431164 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc00013e8e8 192.168.76.0:0xc000702388] misses:0}
	I0325 01:56:33.733853  431164 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0325 01:56:33.733877  431164 network_create.go:106] attempt to create docker network old-k8s-version-20220325015306-262786 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0325 01:56:33.733929  431164 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220325015306-262786
	I0325 01:56:33.801121  431164 network_create.go:90] docker network old-k8s-version-20220325015306-262786 192.168.76.0/24 created
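
The three "skipping subnet ... that is taken" lines show how the free subnet was found: walk candidate private /24 networks upward from 192.168.49.0 in steps of 9 in the third octet, skip any range already backing a docker bridge, and reserve the first free one. An illustrative Go reconstruction of that walk (the start and step size are read off this log, not taken from minikube's source):

	package main

	import "fmt"

	// firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... and returns
	// the first candidate that no existing bridge network occupies.
	func firstFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 254; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[subnet] {
				return subnet
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, // br-fcb21d43dbbf
			"192.168.58.0/24": true, // br-a040cc4bab62
			"192.168.67.0/24": true, // br-12bda0d2312e
		}
		fmt.Println(firstFreeSubnet(taken)) // prints 192.168.76.0/24, as chosen above
	}
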
	I0325 01:56:33.801156  431164 kic.go:106] calculated static IP "192.168.76.2" for the "old-k8s-version-20220325015306-262786" container
	I0325 01:56:33.801207  431164 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0325 01:56:33.833969  431164 cli_runner.go:133] Run: docker volume create old-k8s-version-20220325015306-262786 --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true
	I0325 01:56:33.863735  431164 oci.go:102] Successfully created a docker volume old-k8s-version-20220325015306-262786
	I0325 01:56:33.863800  431164 cli_runner.go:133] Run: docker run --rm --name old-k8s-version-20220325015306-262786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --entrypoint /usr/bin/test -v old-k8s-version-20220325015306-262786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0325 01:56:34.361286  431164 oci.go:106] Successfully prepared a docker volume old-k8s-version-20220325015306-262786
	I0325 01:56:34.361350  431164 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0325 01:56:34.361371  431164 kic.go:179] Starting extracting preloaded images to volume ...
	I0325 01:56:34.361435  431164 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220325015306-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0325 01:56:43.174328  431164 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220325015306-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (8.812845537s)
	I0325 01:56:43.174371  431164 kic.go:188] duration metric: took 8.812995 seconds to extract preloaded images to volume
	W0325 01:56:43.174413  431164 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0325 01:56:43.174420  431164 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0325 01:56:43.174472  431164 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0325 01:56:43.265519  431164 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220325015306-262786 --name old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --network old-k8s-version-20220325015306-262786 --ip 192.168.76.2 --volume old-k8s-version-20220325015306-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
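
The docker run line above is where the "node" itself comes up: the kicbase image is started privileged with /tmp and /run on tmpfs, the host's /lib/modules mounted read-only, the static IP 192.168.76.2 on the freshly created network, the profile volume mounted at /var, and the SSH (22), dockerd (2376), registry (5000), and API-server (8443, 32443) ports each published on a random 127.0.0.1 port. That port mapping is how the later SSH dials to 127.0.0.1:49539 reach the container.
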
	I0325 01:56:43.664728  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Running}}
	I0325 01:56:43.700561  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:43.732786  431164 cli_runner.go:133] Run: docker exec old-k8s-version-20220325015306-262786 stat /var/lib/dpkg/alternatives/iptables
	I0325 01:56:43.800760  431164 oci.go:281] the created container "old-k8s-version-20220325015306-262786" has a running status.
	I0325 01:56:43.800796  431164 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa...
	I0325 01:56:43.897798  431164 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0325 01:56:44.005992  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:44.040565  431164 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0325 01:56:44.040590  431164 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220325015306-262786 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0325 01:56:44.141276  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:44.181329  431164 machine.go:88] provisioning docker machine ...
	I0325 01:56:44.181386  431164 ubuntu.go:169] provisioning hostname "old-k8s-version-20220325015306-262786"
	I0325 01:56:44.181456  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:44.218999  431164 main.go:130] libmachine: Using SSH client type: native
	I0325 01:56:44.219333  431164 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49539 <nil> <nil>}
	I0325 01:56:44.219364  431164 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220325015306-262786 && echo "old-k8s-version-20220325015306-262786" | sudo tee /etc/hostname
	I0325 01:56:44.346895  431164 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220325015306-262786
	
	I0325 01:56:44.347002  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:44.378982  431164 main.go:130] libmachine: Using SSH client type: native
	I0325 01:56:44.379158  431164 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49539 <nil> <nil>}
	I0325 01:56:44.379177  431164 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220325015306-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220325015306-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220325015306-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 01:56:44.499114  431164 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 01:56:44.499153  431164 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 01:56:44.499174  431164 ubuntu.go:177] setting up certificates
	I0325 01:56:44.499184  431164 provision.go:83] configureAuth start
	I0325 01:56:44.499239  431164 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
	I0325 01:56:44.532553  431164 provision.go:138] copyHostCerts
	I0325 01:56:44.532637  431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 01:56:44.532651  431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 01:56:44.532750  431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 01:56:44.532836  431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 01:56:44.532855  431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 01:56:44.532882  431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 01:56:44.532930  431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 01:56:44.532938  431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 01:56:44.532957  431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 01:56:44.532998  431164 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220325015306-262786 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220325015306-262786]
	I0325 01:56:44.716034  431164 provision.go:172] copyRemoteCerts
	I0325 01:56:44.716095  431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 01:56:44.716131  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:44.750262  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:56:44.842652  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0325 01:56:44.860534  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0325 01:56:44.877456  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 01:56:44.894710  431164 provision.go:86] duration metric: configureAuth took 395.50834ms
	I0325 01:56:44.894744  431164 ubuntu.go:193] setting minikube options for container-runtime
	I0325 01:56:44.894925  431164 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0325 01:56:44.894941  431164 machine.go:91] provisioned docker machine in 713.577559ms
	I0325 01:56:44.894947  431164 client.go:171] LocalClient.Create took 11.257778857s
	I0325 01:56:44.894990  431164 start.go:169] duration metric: libmachine.API.Create for "old-k8s-version-20220325015306-262786" took 11.257879213s
	I0325 01:56:44.895011  431164 start.go:302] post-start starting for "old-k8s-version-20220325015306-262786" (driver="docker")
	I0325 01:56:44.895022  431164 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 01:56:44.895080  431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 01:56:44.895130  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:44.927429  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:56:45.014679  431164 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 01:56:45.017487  431164 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 01:56:45.017516  431164 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 01:56:45.017525  431164 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 01:56:45.017530  431164 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 01:56:45.017538  431164 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 01:56:45.017604  431164 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 01:56:45.017669  431164 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 01:56:45.017744  431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 01:56:45.024070  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 01:56:45.041483  431164 start.go:305] post-start completed in 146.454729ms
	I0325 01:56:45.041798  431164 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
	I0325 01:56:45.076182  431164 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/config.json ...
	I0325 01:56:45.076420  431164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 01:56:45.076458  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:45.108209  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:56:45.195204  431164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 01:56:45.198866  431164 start.go:130] duration metric: createHost completed in 11.565060546s
	I0325 01:56:45.198964  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	W0325 01:56:45.231974  431164 fix.go:134] unexpected machine state, will restart: <nil>
	I0325 01:56:45.232009  431164 machine.go:88] provisioning docker machine ...
	I0325 01:56:45.232033  431164 ubuntu.go:169] provisioning hostname "old-k8s-version-20220325015306-262786"
	I0325 01:56:45.232086  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:45.262455  431164 main.go:130] libmachine: Using SSH client type: native
	I0325 01:56:45.262621  431164 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49539 <nil> <nil>}
	I0325 01:56:45.262636  431164 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220325015306-262786 && echo "old-k8s-version-20220325015306-262786" | sudo tee /etc/hostname
	I0325 01:56:45.386554  431164 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220325015306-262786
	
	I0325 01:56:45.386637  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:45.419901  431164 main.go:130] libmachine: Using SSH client type: native
	I0325 01:56:45.420066  431164 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49539 <nil> <nil>}
	I0325 01:56:45.420098  431164 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220325015306-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220325015306-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220325015306-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 01:56:45.542421  431164 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 01:56:45.542450  431164 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 01:56:45.542464  431164 ubuntu.go:177] setting up certificates
	I0325 01:56:45.542474  431164 provision.go:83] configureAuth start
	I0325 01:56:45.542517  431164 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
	I0325 01:56:45.575074  431164 provision.go:138] copyHostCerts
	I0325 01:56:45.575139  431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 01:56:45.575151  431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 01:56:45.575209  431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 01:56:45.575301  431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 01:56:45.575311  431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 01:56:45.575333  431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 01:56:45.575380  431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 01:56:45.575388  431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 01:56:45.575407  431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 01:56:45.575453  431164 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220325015306-262786 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220325015306-262786]
	I0325 01:56:45.699927  431164 provision.go:172] copyRemoteCerts
	I0325 01:56:45.699978  431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 01:56:45.700008  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:45.732608  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:56:46.059471  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 01:56:46.077602  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0325 01:56:46.094741  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0325 01:56:46.111752  431164 provision.go:86] duration metric: configureAuth took 569.266891ms
	I0325 01:56:46.111780  431164 ubuntu.go:193] setting minikube options for container-runtime
	I0325 01:56:46.111953  431164 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0325 01:56:46.111967  431164 machine.go:91] provisioned docker machine in 879.950952ms
	I0325 01:56:46.111977  431164 start.go:302] post-start starting for "old-k8s-version-20220325015306-262786" (driver="docker")
	I0325 01:56:46.111985  431164 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 01:56:46.112037  431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 01:56:46.112083  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:46.146009  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:56:46.238610  431164 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 01:56:46.241357  431164 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 01:56:46.241383  431164 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 01:56:46.241391  431164 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 01:56:46.241399  431164 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 01:56:46.241413  431164 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 01:56:46.241465  431164 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 01:56:46.241560  431164 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 01:56:46.241650  431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 01:56:46.248459  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 01:56:46.265464  431164 start.go:305] post-start completed in 153.469791ms
	I0325 01:56:46.265532  431164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 01:56:46.265573  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:46.297032  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:56:46.382984  431164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 01:56:46.387252  431164 fix.go:57] fixHost completed within 3m17.71088257s
	I0325 01:56:46.387290  431164 start.go:81] releasing machines lock for "old-k8s-version-20220325015306-262786", held for 3m17.710952005s
	I0325 01:56:46.387387  431164 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
	I0325 01:56:46.430623  431164 ssh_runner.go:195] Run: sudo service crio stop
	I0325 01:56:46.430668  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:46.430668  431164 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 01:56:46.430720  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:46.467539  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:56:46.469867  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:56:46.901923  431164 openrc.go:165] stop output: 
	I0325 01:56:46.901990  431164 ssh_runner.go:195] Run: sudo service crio status
	I0325 01:56:46.918929  431164 docker.go:183] disabling docker service ...
	I0325 01:56:46.918994  431164 ssh_runner.go:195] Run: sudo service docker.socket stop
	I0325 01:56:47.285757  431164 openrc.go:165] stop output: 
	** stderr ** 
	Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
	
	** /stderr **
	E0325 01:56:47.285792  431164 docker.go:186] "Failed to stop" err=<
		sudo service docker.socket stop: Process exited with status 5
		stdout:
		
		stderr:
		Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
	 > service="docker.socket"
	I0325 01:56:47.285838  431164 ssh_runner.go:195] Run: sudo service docker.service stop
	I0325 01:56:47.649755  431164 openrc.go:165] stop output: 
	** stderr ** 
	Failed to stop docker.service.service: Unit docker.service.service not loaded.
	
	** /stderr **
	E0325 01:56:47.649784  431164 docker.go:189] "Failed to stop" err=<
		sudo service docker.service stop: Process exited with status 5
		stdout:
		
		stderr:
		Failed to stop docker.service.service: Unit docker.service.service not loaded.
	 > service="docker.service"
	W0325 01:56:47.649796  431164 cruntime.go:283] disable failed: sudo service docker.service stop: Process exited with status 5
	stdout:
	
	stderr:
	Failed to stop docker.service.service: Unit docker.service.service not loaded.
	I0325 01:56:47.649838  431164 ssh_runner.go:195] Run: sudo service docker status
	W0325 01:56:47.664778  431164 containerd.go:244] disableOthers: Docker is still active
	I0325 01:56:47.664901  431164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 01:56:47.676728  431164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
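
That long command materializes /etc/containerd/config.toml by piping a single base64 blob through base64 -d. Decoding the opening of the blob by hand yields the following excerpt (the blob itself is authoritative; treat this transcription as illustrative):

	version = 2
	root = "/var/lib/containerd"
	state = "/run/containerd"
	oom_score = 0
	[grpc]
	  address = "/run/containerd/containerd.sock"

Further down, the decoded config pins the CRI plugin's conf_dir to "/etc/cni/net.mk", which is why the kubelet is later started with the matching --cni-conf-dir=/etc/cni/net.mk flag.
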
	I0325 01:56:47.689398  431164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 01:56:47.695491  431164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 01:56:47.701670  431164 ssh_runner.go:195] Run: sudo service containerd restart
	I0325 01:56:47.775876  431164 openrc.go:152] restart output: 
	I0325 01:56:47.775911  431164 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 01:56:47.775957  431164 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 01:56:47.780036  431164 start.go:462] Will wait 60s for crictl version
	I0325 01:56:47.780095  431164 ssh_runner.go:195] Run: sudo crictl version
	I0325 01:56:47.808499  431164 retry.go:31] will retry after 8.009118606s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T01:56:47Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
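
The "server is not initialized yet" failure is expected this soon after the "sudo service containerd restart" above: containerd is already listening on its socket, but the CRI gRPC service has not finished initializing, so retry.go backs off for about 8 s and the follow-up "sudo crictl version" at 01:56:55 succeeds.
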
	I0325 01:56:55.819167  431164 ssh_runner.go:195] Run: sudo crictl version
	I0325 01:56:55.842809  431164 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 01:56:55.842867  431164 ssh_runner.go:195] Run: containerd --version
	I0325 01:56:55.862493  431164 ssh_runner.go:195] Run: containerd --version
	I0325 01:56:55.885291  431164 out.go:176] * Preparing Kubernetes v1.16.0 on containerd 1.5.10 ...
	I0325 01:56:55.885389  431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 01:56:55.918381  431164 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0325 01:56:55.921728  431164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
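
The one-liner above is minikube's in-place /etc/hosts edit: grep -v strips any stale host.minikube.internal entry, the echo appends the gateway mapping 192.168.76.1, the result lands in a PID-suffixed temp file (/tmp/h.$$), and sudo cp copies it back over /etc/hosts so the file keeps its owner and mode. The same pattern repeats below for control-plane.minikube.internal at 192.168.76.2.
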
	I0325 01:56:55.933134  431164 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 01:56:55.933231  431164 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0325 01:56:55.933303  431164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 01:56:55.955768  431164 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 01:56:55.955788  431164 containerd.go:526] Images already preloaded, skipping extraction
	I0325 01:56:55.955828  431164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 01:56:55.979329  431164 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 01:56:55.979348  431164 cache_images.go:84] Images are preloaded, skipping loading
	I0325 01:56:55.979386  431164 ssh_runner.go:195] Run: sudo crictl info
	I0325 01:56:56.002748  431164 cni.go:93] Creating CNI manager for ""
	I0325 01:56:56.002768  431164 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 01:56:56.002779  431164 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 01:56:56.002792  431164 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220325015306-262786 NodeName:old-k8s-version-20220325015306-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 01:56:56.002974  431164 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20220325015306-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220325015306-262786
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0325 01:56:56.003083  431164 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20220325015306-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0325 01:56:56.003141  431164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0325 01:56:56.009691  431164 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 01:56:56.009827  431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
	I0325 01:56:56.016464  431164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (582 bytes)
	I0325 01:56:56.028607  431164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0325 01:56:56.041034  431164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0325 01:56:56.052949  431164 ssh_runner.go:362] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
	I0325 01:56:56.064655  431164 ssh_runner.go:362] scp memory --> /etc/init.d/kubelet (839 bytes)
	I0325 01:56:56.077971  431164 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0325 01:56:56.080686  431164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 01:56:56.089291  431164 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786 for IP: 192.168.76.2
	I0325 01:56:56.089415  431164 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 01:56:56.089479  431164 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 01:56:56.089550  431164 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.key
	I0325 01:56:56.089574  431164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.crt with IP's: []
	I0325 01:56:56.173943  431164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.crt ...
	I0325 01:56:56.173977  431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.crt: {Name:mk49efef0712da8d212d4d9821e0f44d60c00474 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:56:56.174212  431164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.key ...
	I0325 01:56:56.174231  431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.key: {Name:mk717fd0b3391f00b7d69817a759d1a2ba6569e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:56:56.174386  431164 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25
	I0325 01:56:56.174407  431164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0325 01:56:56.553488  431164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt.31bdca25 ...
	I0325 01:56:56.553520  431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt.31bdca25: {Name:mk0db50f453f850e6693f5f3251d591297fe24c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:56:56.553723  431164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25 ...
	I0325 01:56:56.553738  431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25: {Name:mk44b3f12e50b4c043237e17ee319a130c7e6799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:56:56.553849  431164 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt
	I0325 01:56:56.553904  431164 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key
	I0325 01:56:56.553946  431164 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key
	I0325 01:56:56.553962  431164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt with IP's: []
	I0325 01:56:56.634118  431164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt ...
	I0325 01:56:56.634144  431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt: {Name:mk41a988659c1306ddd1bb6feb42c4fcbdf737c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:56:56.634328  431164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key ...
	I0325 01:56:56.634387  431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key: {Name:mk496346cb1866d19fd00f75f3dc225361dc4fcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:56:56.634593  431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 01:56:56.634634  431164 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 01:56:56.634643  431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 01:56:56.634663  431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 01:56:56.634688  431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 01:56:56.634714  431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 01:56:56.634755  431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 01:56:56.635301  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 01:56:56.653204  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0325 01:56:56.669615  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 01:56:56.686091  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0325 01:56:56.702278  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 01:56:56.718732  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 01:56:56.734704  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 01:56:56.751950  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 01:56:56.768370  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 01:56:56.785599  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 01:56:56.802704  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
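	[Annotation] For reference, the apiserver certificate generated at 01:56:56.174407 above carries IP SANs for the node IP, the first service-CIDR address, localhost, and 10.0.0.1. A minimal, self-signed sketch of producing such a certificate with the Go standard library (minikube actually signs with its CA key; this is illustrative only):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Generate the apiserver key pair.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The IP SANs from the log line above.
    		IPAddresses: []net.IP{
    			net.ParseIP("192.168.76.2"),
    			net.ParseIP("10.96.0.1"),
    			net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"),
    		},
    	}
    	// Self-signed here for brevity; the real cert is signed by minikubeCA.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }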
	I0325 01:56:56.818636  431164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 01:56:56.830434  431164 ssh_runner.go:195] Run: openssl version
	I0325 01:56:56.834834  431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 01:56:56.841688  431164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 01:56:56.844759  431164 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 01:56:56.844799  431164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 01:56:56.849420  431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0325 01:56:56.856216  431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 01:56:56.863401  431164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 01:56:56.866302  431164 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 01:56:56.866341  431164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 01:56:56.871090  431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
	I0325 01:56:56.878141  431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 01:56:56.885043  431164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 01:56:56.887974  431164 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 01:56:56.888019  431164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 01:56:56.892629  431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
	I0325 01:56:56.899573  431164 kubeadm.go:391] StartCluster: {Name:old-k8s-version-20220325015306-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 01:56:56.899669  431164 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 01:56:56.899700  431164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 01:56:56.924510  431164 cri.go:87] found id: ""
	I0325 01:56:56.924564  431164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 01:56:56.967274  431164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 01:56:56.974042  431164 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 01:56:56.974100  431164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 01:56:56.980509  431164 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0325 01:56:56.980549  431164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0325 01:56:57.342628  431164 out.go:203]   - Generating certificates and keys ...
	I0325 01:57:00.421358  431164 out.go:203]   - Booting up control plane ...
	I0325 01:57:10.462463  431164 out.go:203]   - Configuring RBAC rules ...
	I0325 01:57:10.884078  431164 cni.go:93] Creating CNI manager for ""
	I0325 01:57:10.884101  431164 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 01:57:10.885886  431164 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 01:57:10.885957  431164 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 01:57:10.889349  431164 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0325 01:57:10.889369  431164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 01:57:10.902215  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 01:57:11.219931  431164 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 01:57:11.220013  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:11.220072  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=old-k8s-version-20220325015306-262786 minikube.k8s.io/updated_at=2022_03_25T01_57_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:11.227208  431164 ops.go:34] apiserver oom_adj: -16
	I0325 01:57:11.318580  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:11.897565  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:12.397150  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:12.897044  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:13.397714  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:13.897135  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:14.396784  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:14.897509  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:15.397532  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:15.897241  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:16.397418  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:16.897298  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:17.397490  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:17.896851  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:18.396958  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:18.897528  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:19.397449  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:19.896818  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:20.396950  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:20.897730  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:21.397699  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:21.897770  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:22.397129  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:22.897777  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:23.396809  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:23.897374  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:24.396808  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:24.897374  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:25.397510  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:25.465074  431164 kubeadm.go:1020] duration metric: took 14.245126743s to wait for elevateKubeSystemPrivileges.
	I0325 01:57:25.465105  431164 kubeadm.go:393] StartCluster complete in 28.565542464s
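	[Annotation] The ~500ms cadence of the repeated "kubectl get sa default" runs above is a plain poll-until-ready loop with a deadline. A minimal Go sketch of the pattern (the condition function here is a hypothetical stand-in for the real service-account check):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // pollUntil retries cond every interval until it reports success or the
    // timeout elapses, like the elevateKubeSystemPrivileges wait logged above.
    func pollUntil(interval, timeout time.Duration, cond func() (bool, error)) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		ok, err := cond()
    		if err == nil && ok {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for the condition")
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	start := time.Now()
    	err := pollUntil(500*time.Millisecond, 5*time.Second, func() (bool, error) {
    		// Hypothetical condition: succeeds after two seconds.
    		return time.Since(start) > 2*time.Second, nil
    	})
    	fmt.Println("result:", err)
    }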
	I0325 01:57:25.465127  431164 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:57:25.465222  431164 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 01:57:25.466826  431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:57:25.982566  431164 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220325015306-262786" rescaled to 1
	I0325 01:57:25.982642  431164 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 01:57:25.985735  431164 out.go:176] * Verifying Kubernetes components...
	I0325 01:57:25.982729  431164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 01:57:25.985818  431164 ssh_runner.go:195] Run: sudo service kubelet status
	I0325 01:57:25.982734  431164 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0325 01:57:25.982930  431164 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0325 01:57:25.985917  431164 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220325015306-262786"
	I0325 01:57:25.985938  431164 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220325015306-262786"
	W0325 01:57:25.985944  431164 addons.go:165] addon storage-provisioner should already be in state true
	I0325 01:57:25.985974  431164 host.go:66] Checking if "old-k8s-version-20220325015306-262786" exists ...
	I0325 01:57:25.987026  431164 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220325015306-262786"
	I0325 01:57:25.987059  431164 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220325015306-262786"
	I0325 01:57:25.987464  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:57:25.987734  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:57:26.043330  431164 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 01:57:26.041809  431164 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220325015306-262786"
	W0325 01:57:26.043448  431164 addons.go:165] addon default-storageclass should already be in state true
	I0325 01:57:26.043461  431164 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 01:57:26.043473  431164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 01:57:26.043499  431164 host.go:66] Checking if "old-k8s-version-20220325015306-262786" exists ...
	I0325 01:57:26.043528  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:57:26.043990  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:57:26.079480  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:57:26.080003  431164 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 01:57:26.080025  431164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 01:57:26.080072  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:57:26.123901  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
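	[Annotation] The --format={{...}} template in the two docker container inspect runs above extracts the ephemeral host port bound to the container's 22/tcp (49539 here, visible again under NetworkSettings.Ports in the post-mortem inspect below), which then feeds the SSH clients. A sketch of the same lookup from Go, shelling out exactly as the log does:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort recovers the host port mapped to the container's SSH port
    // (22/tcp) via the same docker inspect template used by cli_runner above.
    func sshHostPort(container string) (string, error) {
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("old-k8s-version-20220325015306-262786")
    	fmt.Println(port, err)
    }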
	I0325 01:57:26.130675  431164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0325 01:57:26.132207  431164 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220325015306-262786" to be "Ready" ...
	I0325 01:57:26.203910  431164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 01:57:26.305985  431164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 01:57:26.701311  431164 start.go:777] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
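	[Annotation] The sed pipeline at 01:57:26.130675 above edits the coredns ConfigMap in place: it inserts a "hosts" block, mapping host.minikube.internal to the network gateway, immediately before the Corefile's "forward . /etc/resolv.conf" line, then kubectl-replaces the result. The same string surgery as a Go sketch (sample Corefile is illustrative):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a CoreDNS "hosts" block immediately before the
    // "forward . /etc/resolv.conf" directive, as the sed expression above does.
    func injectHostRecord(corefile, ip string) string {
    	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
    	var out strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			out.WriteString(block)
    		}
    		out.WriteString(line)
    	}
    	return out.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
    }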
	I0325 01:57:26.884863  431164 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0325 01:57:26.884915  431164 addons.go:417] enableAddons completed in 902.209882ms
	I0325 01:57:28.137240  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:30.137382  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:32.137902  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:34.636994  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:36.637231  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:38.637618  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:41.138151  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:43.637420  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:46.137000  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:48.137252  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:50.137524  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:52.638010  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:55.137979  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:57.637645  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:00.137151  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:02.137531  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:04.137755  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:06.637823  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:09.137247  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:11.137649  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:13.138175  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:15.637967  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:18.137346  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:20.137621  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:22.138039  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:24.637505  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:26.637944  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:28.638663  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:31.137778  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:33.137957  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:35.637360  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:37.637456  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:40.137522  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:42.637830  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:44.638149  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:47.137013  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:49.137465  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:51.137831  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:53.138061  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:55.637301  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:57.637937  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:00.137993  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:02.138041  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:04.138262  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:06.637907  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:09.139879  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:11.637442  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:13.637538  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:15.639122  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:18.137261  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:20.137829  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:22.637466  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:24.637948  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:27.137486  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:29.137528  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:31.137566  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:33.138065  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:35.637535  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:37.637991  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:39.638114  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:42.137688  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:44.637241  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:46.637686  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:49.137625  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:51.638236  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:54.137670  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:56.138392  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:58.637751  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:00.638089  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:03.137541  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:05.637552  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:08.137145  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:10.137534  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:12.637732  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:15.138150  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:17.637995  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:20.137994  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:22.637195  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:24.638276  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:27.137477  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:29.138059  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:31.138114  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:33.637955  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:35.638305  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:38.137342  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:40.138018  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:42.638060  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:45.137181  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:47.137290  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:49.137908  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:51.638340  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:54.137713  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:56.637016  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:58.637267  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:00.637464  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:02.638041  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:05.137294  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:07.137350  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:09.137969  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:11.638005  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:14.137955  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:16.637434  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:18.637978  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:21.137203  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:23.137475  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:25.137628  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:26.139331  431164 node_ready.go:38] duration metric: took 4m0.007092133s waiting for node "old-k8s-version-20220325015306-262786" to be "Ready" ...
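	[Annotation] The node_ready.go poll above checks the Ready condition on the Node object every two seconds and gives up at the deadline, producing the GUEST_START failure below. A sketch of the underlying check using client-go (recent client-go versions take a context; the kubeconfig path is illustrative, and this is not minikube's actual code):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True -- the
    // condition the wait above never saw become true.
    func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Hypothetical kubeconfig path; adjust for your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		ok, err := nodeReady(cs, "old-k8s-version-20220325015306-262786")
    		fmt.Println("ready:", ok, "err:", err)
    		if ok {
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }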
	I0325 02:01:26.141382  431164 out.go:176] 
	W0325 02:01:26.141510  431164 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:01:26.141527  431164 out.go:241] * 
	W0325 02:01:26.142250  431164 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0325 02:01:26.143976  431164 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-20220325015306-262786 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220325015306-262786
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220325015306-262786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b",
	        "Created": "2022-03-25T01:56:43.297059247Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 457693,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-25T01:56:43.655669688Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/hosts",
	        "LogPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b-json.log",
	        "Name": "/old-k8s-version-20220325015306-262786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220325015306-262786:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220325015306-262786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3-init/diff:/var/lib/docker/overlay2/be54eb94fcdf63a43430c63a2deca34069b6322a2c5c80bf32c41c253b4eef44/diff:/var/lib/docker/overlay2/21ae1babc9289fff260c3571871aeb833b37e21656a9cc3eb8df07eb3fe4f79a/diff:/var/lib/docker/overlay2/3ee3331c2ddb88569b411d0ab54a2ef7d7d01ca16c80ced0833580bfbe9bdada/diff:/var/lib/docker/overlay2/d8bc8d60c9bd47ff1095b644ef6d44396d637a148bcebd5ea3b7706fee6b13fe/diff:/var/lib/docker/overlay2/ca1519d93c9c70a99f709b179bab33e31837f4b561c407c362770656a0ad970a/diff:/var/lib/docker/overlay2/8b7ed626d5c01c442f80e5e9bbe87bdaa4e3b209e4d0720010e78ab32631a44a/diff:/var/lib/docker/overlay2/fb54723378f675b6bc72cd8608807007fdf0fc435e1383398764588c2881dcc7/diff:/var/lib/docker/overlay2/20deb1df880f3adcdce0caa0e0b6ce0170bb01f7b7c564aa7c00c10e886a8422/diff:/var/lib/docker/overlay2/3e9c58516a6ca7eb07cbd77ece10826bcffc2c564c20a046413c894f1e457c14/diff:/var/lib/docker/overlay2/9fb4a5
72727350e63058db77497edb3aa8f3fd157bf3faa4b882f3d6218a2d2c/diff:/var/lib/docker/overlay2/2dad70b776042365cd2686f6925d1728b98e82e82f5ec21fcafaa6ce796653ed/diff:/var/lib/docker/overlay2/d94272e0e249e656b05e1483e035d137254d3bab6b9c568065d1f8783a72cf04/diff:/var/lib/docker/overlay2/c92254120acded698585ef577c9ac3d6f73267981cf36a87ee38ccd694f47b94/diff:/var/lib/docker/overlay2/84b4bbb670c367ba779baabe503b5345e2c3e2beb5a4505c3b235e5db4e89ee6/diff:/var/lib/docker/overlay2/4981a02b24aef7d5c066a42837381dcdd4a299b491d8e55523fca674cd0db0d1/diff:/var/lib/docker/overlay2/c3c34e9c466bb3a144a51042f0930825943916afe285a7f97644c400518f341f/diff:/var/lib/docker/overlay2/44f4921d100d6ba90db390588513726503aec84844325bd99eeb137c6018277f/diff:/var/lib/docker/overlay2/a39a458488b7f863079e4c6b58196e8a4f9082987519a734c45a007cd0d94828/diff:/var/lib/docker/overlay2/f0312047c7b0b02fd66fd826e23406e40cee0ca3ceecfd3ead5dcecbc5026230/diff:/var/lib/docker/overlay2/96c9397a20500e41888794ddb5877995a1734042648a24d59ca0d2ec5021e9de/diff:/var/lib/d
ocker/overlay2/8dea2fad08fc7127380e7b5ee48074c49d9bb8abb4e0e626d1753b47e734e16a/diff:/var/lib/docker/overlay2/b45ce3d74626e250be956220b3bdd19784c7b5f160566cf2abc4e3bebec2e787/diff:/var/lib/docker/overlay2/c53d5b53646725c2e75d104fbdf63f67b1a6d4ec7be410f678c39db7ca88704f/diff:/var/lib/docker/overlay2/c989625713fed7c79c6acf122f86cb4a5d36c5c25f16b6ff042aba0f5c76ef40/diff:/var/lib/docker/overlay2/062c90de70f705242f19d7fd008480be165d852e536336d97bcfe7aaba03bc2c/diff:/var/lib/docker/overlay2/9dd5e3e1997449a8dd0820e30ab1aa5b34db265e9783f9431ebcca7ceaf17510/diff:/var/lib/docker/overlay2/4cb50a0a67380109d348cd3005e5b855fceaf243cf5b0130df8952ed58e6c56c/diff:/var/lib/docker/overlay2/1fce572a3789e30bd91fd684a3bc2cae58743b3718b1d078378158f22156795e/diff:/var/lib/docker/overlay2/2bb28738c8f2de75a3da83169e8b29e28c57bf73908fe80dbca06551ac39d459/diff:/var/lib/docker/overlay2/70c1f9c120af3acda7bbd97c063aeed205a47f16b2818b7a2c4e5cfa2e3321bf/diff:/var/lib/docker/overlay2/84cfc718f71abd3da77845f467dceaeceb62953d1f92e9cb2d966b19d2e
9a733/diff:/var/lib/docker/overlay2/8d6f862f75e903072cefca0f974c925dc5946ac5bf7bcb923adecf23cdb3d454/diff:/var/lib/docker/overlay2/778af97f4ec3a1e9ceed247958939b375c3209058ee649ac0231b3ccf59c0e5d/diff:/var/lib/docker/overlay2/c0e0a5b57f41ef9ddf67d67f928bcbbd060abb8aa3ec732c9ee48b3d5ce723a2/diff:/var/lib/docker/overlay2/f4bc2ed173f4985e492d89df0a08aa6017952a9ac37054537d57bb7589c1560e/diff:/var/lib/docker/overlay2/562d496753ef0c1e8279787dfdb7cb4d6e8cfbd0eaf79a1f9dc3fd10916330b5/diff:/var/lib/docker/overlay2/717fb77b4f16514e3bd496845adfe213bd63609053b43f6d800e6757197f0f04/diff:/var/lib/docker/overlay2/4e8d84337665652419a5a40f908d308087c202f55b785114c1e38be84a17eca7/diff:/var/lib/docker/overlay2/5b34f3b4b29c9f9ab991b524096584bbf01d14e9d8d4b7786bda6d28241999e8/diff:/var/lib/docker/overlay2/49e6c28c6a50420d2f8e2a3d3278d425495086d9478a7ece39dd989925949a5d/diff:/var/lib/docker/overlay2/86c1534e0117ca4e106fa3c177c4f1b2d85e37b9d2a5dceeb007afff1721713e/diff:/var/lib/docker/overlay2/c5013a5641f131cadca99884c2ae5b785bfae4
a079463490ea0cd215cd884d43/diff:/var/lib/docker/overlay2/f61ccdb261987275521111370c06a14baf102e5047e24281f278eaaee820a410/diff:/var/lib/docker/overlay2/46838e2b0c3f67b4bfda29963d76e2c8babbd54904a4a6f5745e924a73437c2d/diff:/var/lib/docker/overlay2/16180439a4d3ee12ff794b26cbfd692186d7785b4c6f33c8c57416535667c54e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220325015306-262786",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220325015306-262786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220325015306-262786",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220325015306-262786",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220325015306-262786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44b9519d0b55a0dbe9bc349c627da03ca1d456aab29fe1f9cc6fbe902a60b4e0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49539"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49538"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49535"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49537"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49536"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/44b9519d0b55",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220325015306-262786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e6a4c0e8f4c7",
	                        "old-k8s-version-20220325015306-262786"
	                    ],
	                    "NetworkID": "739cf1dc095b5d758dfcb21f6f999d4a170c6b33046de4a26204586f05d2d4a4",
	                    "EndpointID": "f17636c1e1855543cb0356e0ced5eac0102a5fed579cb886a1c3e850498bc7d7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
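The inspect payload above is never parsed wholesale; the test harness reads individual fields out of it with Go templates (the same docker container inspect -f calls show up later in this log). A minimal sketch of the two lookups that matter most here, using the profile name from this report:

    NAME=old-k8s-version-20220325015306-262786
    # Host port published for the container's SSH endpoint (22/tcp):
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' "$NAME"
    # Static IP assigned on the per-profile bridge network:
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$NAME"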
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220325015306-262786 -n old-k8s-version-20220325015306-262786
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20220325015306-262786 logs -n 25
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                       | missing-upgrade-20220325014930-262786    | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:50:33 UTC | Fri, 25 Mar 2022 01:51:18 UTC |
	|         | missing-upgrade-20220325014930-262786    |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | missing-upgrade-20220325014930-262786    | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:51:18 UTC | Fri, 25 Mar 2022 01:51:21 UTC |
	|         | missing-upgrade-20220325014930-262786    |                                          |         |         |                               |                               |
	| start   | -p pause-20220325015121-262786           | pause-20220325015121-262786              | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:51:21 UTC | Fri, 25 Mar 2022 01:52:32 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --install-addons=false                   |                                          |         |         |                               |                               |
	|         | --wait=all --driver=docker               |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p pause-20220325015121-262786           | pause-20220325015121-262786              | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:32 UTC | Fri, 25 Mar 2022 01:52:47 UTC |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| pause   | -p pause-20220325015121-262786           | pause-20220325015121-262786              | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:48 UTC | Fri, 25 Mar 2022 01:52:48 UTC |
	|         | --alsologtostderr -v=5                   |                                          |         |         |                               |                               |
	| unpause | -p pause-20220325015121-262786           | pause-20220325015121-262786              | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:49 UTC | Fri, 25 Mar 2022 01:52:49 UTC |
	|         | --alsologtostderr -v=5                   |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20220325015003-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:51:06 UTC | Fri, 25 Mar 2022 01:52:50 UTC |
	|         | kubernetes-upgrade-20220325015003-262786 |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | cert-expiration-20220325014851-262786    | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:36 UTC | Fri, 25 Mar 2022 01:52:51 UTC |
	|         | cert-expiration-20220325014851-262786    |                                          |         |         |                               |                               |
	|         | --memory=2048 --cert-expiration=8760h    |                                          |         |         |                               |                               |
	|         | --driver=docker                          |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | cert-expiration-20220325014851-262786    | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:51 UTC | Fri, 25 Mar 2022 01:52:54 UTC |
	|         | cert-expiration-20220325014851-262786    |                                          |         |         |                               |                               |
	| pause   | -p pause-20220325015121-262786           | pause-20220325015121-262786              | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:49 UTC | Fri, 25 Mar 2022 01:52:55 UTC |
	|         | --alsologtostderr -v=5                   |                                          |         |         |                               |                               |
	| delete  | -p pause-20220325015121-262786           | pause-20220325015121-262786              | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:55 UTC | Fri, 25 Mar 2022 01:53:05 UTC |
	|         | --alsologtostderr -v=5                   |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20220325015003-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:50 UTC | Fri, 25 Mar 2022 01:53:05 UTC |
	|         | kubernetes-upgrade-20220325015003-262786 |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| profile | list --output json                       | minikube                                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:05 UTC | Fri, 25 Mar 2022 01:53:05 UTC |
	| delete  | -p pause-20220325015121-262786           | pause-20220325015121-262786              | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:06 UTC | Fri, 25 Mar 2022 01:53:06 UTC |
	| delete  | -p                                       | kubernetes-upgrade-20220325015003-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:05 UTC | Fri, 25 Mar 2022 01:53:09 UTC |
	|         | kubernetes-upgrade-20220325015003-262786 |                                          |         |         |                               |                               |
	| start   | -p auto-20220325014919-262786            | auto-20220325014919-262786               | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:54 UTC | Fri, 25 Mar 2022 01:53:54 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m            |                                          |         |         |                               |                               |
	|         | --driver=docker                          |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| ssh     | -p auto-20220325014919-262786            | auto-20220325014919-262786               | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:54 UTC | Fri, 25 Mar 2022 01:53:54 UTC |
	|         | pgrep -a kubelet                         |                                          |         |         |                               |                               |
	| delete  | -p auto-20220325014919-262786            | auto-20220325014919-262786               | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:05 UTC | Fri, 25 Mar 2022 01:54:08 UTC |
	| start   | -p                                       | running-upgrade-20220325014921-262786    | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:37 UTC | Fri, 25 Mar 2022 01:54:11 UTC |
	|         | running-upgrade-20220325014921-262786    |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | running-upgrade-20220325014921-262786    | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:11 UTC | Fri, 25 Mar 2022 01:54:22 UTC |
	|         | running-upgrade-20220325014921-262786    |                                          |         |         |                               |                               |
	| start   | -p                                       | cilium-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:09 UTC | Fri, 25 Mar 2022 01:54:40 UTC |
	|         | cilium-20220325014921-262786             |                                          |         |         |                               |                               |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m            |                                          |         |         |                               |                               |
	|         | --cni=cilium --driver=docker             |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| ssh     | -p                                       | cilium-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:45 UTC | Fri, 25 Mar 2022 01:54:45 UTC |
	|         | cilium-20220325014921-262786             |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                         |                                          |         |         |                               |                               |
	| delete  | -p                                       | cilium-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:57 UTC | Fri, 25 Mar 2022 01:55:00 UTC |
	|         | cilium-20220325014921-262786             |                                          |         |         |                               |                               |
	| start   | -p                                       | kindnet-20220325014920-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:55:00 UTC | Fri, 25 Mar 2022 01:56:12 UTC |
	|         | kindnet-20220325014920-262786            |                                          |         |         |                               |                               |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m            |                                          |         |         |                               |                               |
	|         | --cni=kindnet --driver=docker            |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| ssh     | -p                                       | kindnet-20220325014920-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:56:17 UTC | Fri, 25 Mar 2022 01:56:17 UTC |
	|         | kindnet-20220325014920-262786            |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                         |                                          |         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 01:55:00
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
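Everything below uses the klog header documented on the line above, which makes the interleaved output from several concurrent minikube runs easy to slice apart. A small sketch, assuming this block has been saved to a file named minikube.log with the leading tabs stripped:

    # Warnings and errors only (severity is the first character):
    grep -E '^[WEF][0-9]{4} ' minikube.log
    # Lines from a single process; 449514 is the kindnet start below:
    awk '$3 == "449514"' minikube.log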
	I0325 01:55:00.714916  449514 out.go:297] Setting OutFile to fd 1 ...
	I0325 01:55:00.715078  449514 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:55:00.715089  449514 out.go:310] Setting ErrFile to fd 2...
	I0325 01:55:00.715093  449514 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:55:00.715209  449514 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 01:55:00.715486  449514 out.go:304] Setting JSON to false
	I0325 01:55:00.717003  449514 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":16373,"bootTime":1648156928,"procs":672,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 01:55:00.717080  449514 start.go:125] virtualization: kvm guest
	I0325 01:55:00.720007  449514 out.go:176] * [kindnet-20220325014920-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 01:55:00.721607  449514 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 01:55:00.720213  449514 notify.go:193] Checking for updates...
	I0325 01:55:00.723033  449514 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 01:55:00.724516  449514 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 01:55:00.725923  449514 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 01:55:00.727325  449514 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 01:55:00.727846  449514 config.go:176] Loaded profile config "calico-20220325014921-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:55:00.727961  449514 config.go:176] Loaded profile config "custom-weave-20220325014921-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:55:00.728092  449514 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0325 01:55:00.728148  449514 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 01:55:00.774853  449514 docker.go:136] docker version: linux-20.10.14
	I0325 01:55:00.775028  449514 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 01:55:00.877241  449514 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 01:55:00.807852165 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 01:55:00.877362  449514 docker.go:253] overlay module found
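The two docker info probes in this run (here and again at 01:55:00.986) ask the daemon for its full state as JSON and read fields such as the storage driver and the cgroup capability flags out of the decoded payload. A rough shell equivalent, assuming jq is installed:

    docker system info --format '{{json .}}' |
      jq '{Driver, MemoryLimit, SwapLimit, CgroupDriver}'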
	I0325 01:55:00.879932  449514 out.go:176] * Using the docker driver based on user configuration
	I0325 01:55:00.879963  449514 start.go:284] selected driver: docker
	I0325 01:55:00.879968  449514 start.go:801] validating driver "docker" against <nil>
	I0325 01:55:00.879986  449514 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 01:55:00.880043  449514 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 01:55:00.880063  449514 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 01:55:00.881696  449514 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 01:55:00.882284  449514 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 01:55:00.986213  449514 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 01:55:00.913247272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 01:55:00.986345  449514 start_flags.go:290] no existing cluster config was found, will generate one from the flags 
	I0325 01:55:00.986546  449514 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 01:55:00.986577  449514 cni.go:93] Creating CNI manager for "kindnet"
	I0325 01:55:00.986588  449514 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0325 01:55:00.986599  449514 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0325 01:55:00.986604  449514 start_flags.go:299] Found "CNI" CNI - setting NetworkPlugin=cni
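The three cni.go lines above show minikube injecting the kubelet.cni-conf-dir extra-config automatically whenever a CNI is paired with the containerd runtime. The same setting, if passed explicitly, would look like this (flags as used elsewhere in this report):

    out/minikube-linux-amd64 start -p kindnet-20220325014920-262786 \
      --driver=docker --container-runtime=containerd --cni=kindnet \
      --extra-config=kubelet.cni-conf-dir=/etc/cni/net.mk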
	I0325 01:55:00.986614  449514 start_flags.go:304] config:
	{Name:kindnet-20220325014920-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:kindnet-20220325014920-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 01:55:00.989926  449514 out.go:176] * Starting control plane node kindnet-20220325014920-262786 in cluster kindnet-20220325014920-262786
	I0325 01:55:00.989961  449514 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 01:55:00.991465  449514 out.go:176] * Pulling base image ...
	I0325 01:55:00.991495  449514 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 01:55:00.991520  449514 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
	I0325 01:55:00.991532  449514 cache.go:57] Caching tarball of preloaded images
	I0325 01:55:00.991588  449514 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 01:55:00.991753  449514 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 01:55:00.991772  449514 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on containerd
	I0325 01:55:00.991875  449514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/config.json ...
	I0325 01:55:00.991911  449514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/config.json: {Name:mk363c00d135004479b2648b7f626008aacd2fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:55:01.026713  449514 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 01:55:01.026749  449514 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 01:55:01.026766  449514 cache.go:208] Successfully downloaded all kic artifacts
	I0325 01:55:01.026808  449514 start.go:348] acquiring machines lock for kindnet-20220325014920-262786: {Name:mka5ea64952550618d6576e44be996cc56d8d8bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 01:55:01.026939  449514 start.go:352] acquired machines lock for "kindnet-20220325014920-262786" in 109.57µs
	I0325 01:55:01.026980  449514 start.go:90] Provisioning new machine with config: &{Name:kindnet-20220325014920-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:kindnet-20220325014920-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 01:55:01.027079  449514 start.go:127] createHost starting for "" (driver="docker")
	I0325 01:54:57.236361  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:00.275444  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:54:58.973463  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:01.450813  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:04.932050  442784 out.go:203]   - Generating certificates and keys ...
	I0325 01:55:04.936166  442784 out.go:203]   - Booting up control plane ...
	I0325 01:55:04.939871  442784 out.go:203]   - Configuring RBAC rules ...
	I0325 01:55:04.942250  442784 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0325 01:55:01.029656  449514 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0325 01:55:01.029913  449514 start.go:161] libmachine.API.Create for "kindnet-20220325014920-262786" (driver="docker")
	I0325 01:55:01.029948  449514 client.go:168] LocalClient.Create starting
	I0325 01:55:01.030015  449514 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0325 01:55:01.030070  449514 main.go:130] libmachine: Decoding PEM data...
	I0325 01:55:01.030087  449514 main.go:130] libmachine: Parsing certificate...
	I0325 01:55:01.030122  449514 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0325 01:55:01.030139  449514 main.go:130] libmachine: Decoding PEM data...
	I0325 01:55:01.030147  449514 main.go:130] libmachine: Parsing certificate...
	I0325 01:55:01.030465  449514 cli_runner.go:133] Run: docker network inspect kindnet-20220325014920-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0325 01:55:01.062523  449514 cli_runner.go:180] docker network inspect kindnet-20220325014920-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0325 01:55:01.062661  449514 network_create.go:254] running [docker network inspect kindnet-20220325014920-262786] to gather additional debugging logs...
	I0325 01:55:01.062707  449514 cli_runner.go:133] Run: docker network inspect kindnet-20220325014920-262786
	W0325 01:55:01.097010  449514 cli_runner.go:180] docker network inspect kindnet-20220325014920-262786 returned with exit code 1
	I0325 01:55:01.097062  449514 network_create.go:257] error running [docker network inspect kindnet-20220325014920-262786]: docker network inspect kindnet-20220325014920-262786: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220325014920-262786
	I0325 01:55:01.097128  449514 network_create.go:259] output of [docker network inspect kindnet-20220325014920-262786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220325014920-262786
	
	** /stderr **
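The exit-status-1 path above is the expected one: network_create.go treats any non-zero exit from docker network inspect as "network does not exist yet" and re-runs the command without a format string purely to capture the error text. Condensed to its essence:

    NET=kindnet-20220325014920-262786
    if ! docker network inspect "$NET" >/dev/null 2>&1; then
      echo "network $NET missing, will create it"
    fi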
	I0325 01:55:01.097193  449514 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 01:55:01.135618  449514 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-fcb21d43dbbf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:db:45:ae:c5}}
	I0325 01:55:01.136762  449514 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0005880b0] misses:0}
	I0325 01:55:01.136837  449514 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0325 01:55:01.136861  449514 network_create.go:106] attempt to create docker network kindnet-20220325014920-262786 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0325 01:55:01.136925  449514 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220325014920-262786
	I0325 01:55:01.205843  449514 network_create.go:90] docker network kindnet-20220325014920-262786 192.168.58.0/24 created
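The subnet scan above skipped 192.168.49.0/24 because br-fcb21d43dbbf already holds it, reserved 192.168.58.0/24 for a minute, and created the bridge. The create call, trimmed of the --ip-masq/--icc driver options it also passes, boils down to:

    docker network create --driver=bridge \
      --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      kindnet-20220325014920-262786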
	I0325 01:55:01.205880  449514 kic.go:106] calculated static IP "192.168.58.2" for the "kindnet-20220325014920-262786" container
	I0325 01:55:01.205973  449514 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0325 01:55:01.239559  449514 cli_runner.go:133] Run: docker volume create kindnet-20220325014920-262786 --label name.minikube.sigs.k8s.io=kindnet-20220325014920-262786 --label created_by.minikube.sigs.k8s.io=true
	I0325 01:55:01.271714  449514 oci.go:102] Successfully created a docker volume kindnet-20220325014920-262786
	I0325 01:55:01.271799  449514 cli_runner.go:133] Run: docker run --rm --name kindnet-20220325014920-262786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220325014920-262786 --entrypoint /usr/bin/test -v kindnet-20220325014920-262786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0325 01:55:01.850347  449514 oci.go:106] Successfully prepared a docker volume kindnet-20220325014920-262786
	I0325 01:55:01.850392  449514 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 01:55:01.850418  449514 kic.go:179] Starting extracting preloaded images to volume ...
	I0325 01:55:01.850497  449514 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220325014920-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
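The extraction command above illustrates a general pattern: populate a named volume by mounting an archive read-only into a disposable container alongside the volume, then untarring inside it. A generic sketch of the same pattern; IMAGE, the archive path, and the volume name here are placeholders, not the exact values from this run:

    docker run --rm \
      -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" \
      -v myvolume:/extractDir \
      --entrypoint /usr/bin/tar IMAGE -I lz4 -xf /preloaded.tar -C /extractDir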
	I0325 01:55:03.334100  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:06.368209  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:04.944398  442784 out.go:176] * Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	I0325 01:55:04.944466  442784 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 01:55:04.944515  442784 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml
	I0325 01:55:04.948418  442784 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/tmp/minikube/cni.yaml': No such file or directory
	I0325 01:55:04.948452  442784 ssh_runner.go:362] scp testdata/weavenet.yaml --> /var/tmp/minikube/cni.yaml (10948 bytes)
	I0325 01:55:04.974787  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
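The custom-weave run (pid 442784) pushes its CNI manifest over SSH with a stat-first existence check, then applies it with the cluster's own kubectl binary; the %!s(MISSING) noise in the stat lines above is Go's fmt missing-argument marker mangling what appears to be stat -c "%s %y". Condensed, as run inside the node:

    stat -c "%s %y" /var/tmp/minikube/cni.yaml || true   # size/mtime probe
    sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml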
	I0325 01:55:05.905700  442784 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 01:55:05.905784  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:05.905796  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=custom-weave-20220325014921-262786 minikube.k8s.io/updated_at=2022_03_25T01_55_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:05.913279  442784 ops.go:34] apiserver oom_adj: -16
	I0325 01:55:06.350587  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:06.946231  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:07.446253  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:03.951437  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:06.450673  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:09.407114  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:07.946635  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:08.446097  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:08.945803  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:09.446739  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:09.946451  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:10.445869  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:10.945907  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:11.446001  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:11.946712  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:12.446459  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:08.791543  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:10.950007  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:12.950870  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
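The pod_ready lines from pid 440243 are the calico run polling one pod's Ready condition every couple of seconds. With kubectl the same wait collapses to a single command; the pod name is the one from this log:

    kubectl -n kube-system wait --for=condition=Ready \
      pod/calico-kube-controllers-8594699699-b8cwf --timeout=5m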
	I0325 01:55:12.256742  449514 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220325014920-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (10.406200607s)
	I0325 01:55:12.256780  449514 kic.go:188] duration metric: took 10.406356 seconds to extract preloaded images to volume
	W0325 01:55:12.256859  449514 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0325 01:55:12.256876  449514 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0325 01:55:12.256928  449514 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0325 01:55:12.350466  449514 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220325014920-262786 --name kindnet-20220325014920-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220325014920-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220325014920-262786 --network kindnet-20220325014920-262786 --ip 192.168.58.2 --volume kindnet-20220325014920-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0325 01:55:12.757384  449514 cli_runner.go:133] Run: docker container inspect kindnet-20220325014920-262786 --format={{.State.Running}}
	I0325 01:55:12.792632  449514 cli_runner.go:133] Run: docker container inspect kindnet-20220325014920-262786 --format={{.State.Status}}
	I0325 01:55:12.825757  449514 cli_runner.go:133] Run: docker exec kindnet-20220325014920-262786 stat /var/lib/dpkg/alternatives/iptables
	I0325 01:55:12.888453  449514 oci.go:281] the created container "kindnet-20220325014920-262786" has a running status.
	I0325 01:55:12.888494  449514 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa...
	I0325 01:55:13.118673  449514 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0325 01:55:13.204377  449514 cli_runner.go:133] Run: docker container inspect kindnet-20220325014920-262786 --format={{.State.Status}}
	I0325 01:55:13.238141  449514 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0325 01:55:13.238171  449514 kic_runner.go:114] Args: [docker exec --privileged kindnet-20220325014920-262786 chown docker:docker /home/docker/.ssh/authorized_keys]
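The key plumbing above generates an RSA key on the host, writes the public half into /home/docker/.ssh/authorized_keys inside the node, and fixes its ownership with a privileged exec. A manual equivalent, assuming the .ssh directory already exists in the kicbase image:

    CONTAINER=kindnet-20220325014920-262786
    ssh-keygen -t rsa -N '' -f ./id_rsa
    docker cp ./id_rsa.pub "$CONTAINER":/home/docker/.ssh/authorized_keys
    docker exec --privileged "$CONTAINER" chown docker:docker /home/docker/.ssh/authorized_keys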
	I0325 01:55:13.326830  449514 cli_runner.go:133] Run: docker container inspect kindnet-20220325014920-262786 --format={{.State.Status}}
	I0325 01:55:13.361589  449514 machine.go:88] provisioning docker machine ...
	I0325 01:55:13.361634  449514 ubuntu.go:169] provisioning hostname "kindnet-20220325014920-262786"
	I0325 01:55:13.361706  449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
	I0325 01:55:13.391728  449514 main.go:130] libmachine: Using SSH client type: native
	I0325 01:55:13.391972  449514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49534 <nil> <nil>}
	I0325 01:55:13.391997  449514 main.go:130] libmachine: About to run SSH command:
	sudo hostname kindnet-20220325014920-262786 && echo "kindnet-20220325014920-262786" | sudo tee /etc/hostname
	I0325 01:55:13.521224  449514 main.go:130] libmachine: SSH cmd err, output: <nil>: kindnet-20220325014920-262786
	
	I0325 01:55:13.521306  449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
	I0325 01:55:13.552907  449514 main.go:130] libmachine: Using SSH client type: native
	I0325 01:55:13.553068  449514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49534 <nil> <nil>}
	I0325 01:55:13.553097  449514 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-20220325014920-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20220325014920-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-20220325014920-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 01:55:13.670815  449514 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 01:55:13.670849  449514 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 01:55:13.670880  449514 ubuntu.go:177] setting up certificates
	I0325 01:55:13.670894  449514 provision.go:83] configureAuth start
	I0325 01:55:13.670975  449514 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220325014920-262786
	I0325 01:55:13.702037  449514 provision.go:138] copyHostCerts
	I0325 01:55:13.702103  449514 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 01:55:13.702115  449514 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 01:55:13.702173  449514 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 01:55:13.702265  449514 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 01:55:13.702275  449514 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 01:55:13.702300  449514 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 01:55:13.702357  449514 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 01:55:13.702364  449514 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 01:55:13.702384  449514 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 01:55:13.702428  449514 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.kindnet-20220325014920-262786 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20220325014920-262786]
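
provision.go bakes the node IP, the loopback addresses, and the names minikube and kindnet-20220325014920-262786 into the server certificate as SANs, so clients dialing any of those identities pass TLS verification. A hedged way to inspect the generated SAN list by hand (path as logged above; MINIKUBE_HOME here is the .minikube directory from this run's environment):

	# sketch: print the SAN extension of the server cert written above
	openssl x509 -noout -text -in "$MINIKUBE_HOME/machines/server.pem" \
	  | grep -A1 'Subject Alternative Name'
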
	I0325 01:55:13.877542  449514 provision.go:172] copyRemoteCerts
	I0325 01:55:13.877598  449514 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 01:55:13.877633  449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
	I0325 01:55:13.910635  449514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49534 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa Username:docker}
	I0325 01:55:13.998738  449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 01:55:14.016410  449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0325 01:55:14.033193  449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0325 01:55:14.049802  449514 provision.go:86] duration metric: configureAuth took 378.8914ms
	I0325 01:55:14.049826  449514 ubuntu.go:193] setting minikube options for container-runtime
	I0325 01:55:14.050001  449514 config.go:176] Loaded profile config "kindnet-20220325014920-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:55:14.050015  449514 machine.go:91] provisioned docker machine in 688.400649ms
	I0325 01:55:14.050021  449514 client.go:171] LocalClient.Create took 13.020061955s
	I0325 01:55:14.050037  449514 start.go:169] duration metric: libmachine.API.Create for "kindnet-20220325014920-262786" took 13.020125504s
	I0325 01:55:14.050044  449514 start.go:302] post-start starting for "kindnet-20220325014920-262786" (driver="docker")
	I0325 01:55:14.050050  449514 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 01:55:14.050113  449514 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 01:55:14.050160  449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
	I0325 01:55:14.083492  449514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49534 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa Username:docker}
	I0325 01:55:14.174183  449514 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 01:55:14.176870  449514 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 01:55:14.176891  449514 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 01:55:14.176901  449514 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 01:55:14.176908  449514 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 01:55:14.176918  449514 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 01:55:14.176964  449514 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 01:55:14.177026  449514 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 01:55:14.177106  449514 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 01:55:14.183578  449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 01:55:14.200482  449514 start.go:305] post-start completed in 150.428094ms
	I0325 01:55:14.200771  449514 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220325014920-262786
	I0325 01:55:14.232519  449514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/config.json ...
	I0325 01:55:14.232743  449514 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 01:55:14.232798  449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
	I0325 01:55:14.266097  449514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49534 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa Username:docker}
	I0325 01:55:14.350928  449514 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 01:55:14.354518  449514 start.go:130] duration metric: createHost completed in 13.327425814s
	I0325 01:55:14.354543  449514 start.go:81] releasing machines lock for "kindnet-20220325014920-262786", held for 13.327579886s
	I0325 01:55:14.354616  449514 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220325014920-262786
	I0325 01:55:14.384844  449514 ssh_runner.go:195] Run: systemctl --version
	I0325 01:55:14.384877  449514 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 01:55:14.384893  449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
	I0325 01:55:14.384926  449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
	I0325 01:55:14.416087  449514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49534 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa Username:docker}
	I0325 01:55:14.417099  449514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49534 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa Username:docker}
	I0325 01:55:14.518639  449514 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 01:55:14.529395  449514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 01:55:14.537892  449514 docker.go:183] disabling docker service ...
	I0325 01:55:14.537955  449514 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 01:55:14.553167  449514 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 01:55:14.561708  449514 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 01:55:14.639668  449514 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 01:55:14.717986  449514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 01:55:14.727173  449514 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 01:55:14.739685  449514 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
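
Two details in the pair of commands above are easy to misread. The %!s(MISSING) fragments are artifacts of minikube's own log formatting (a Go format verb left without a matching argument), not part of the executed commands. And the containerd config is shipped as a base64 payload piped through base64 -d so a multi-line TOML file survives shell quoting intact; decoding it yields the expected version = 2 config, including SystemdCgroup = false and conf_dir = "/etc/cni/net.mk". The same delivery pattern, sketched with a stand-in file and host name:

	# sketch: deliver an arbitrary config file through a quoted remote command
	payload=$(base64 -w0 config.toml)
	ssh node "sudo mkdir -p /etc/containerd && \
	  printf %s '$payload' | base64 -d | sudo tee /etc/containerd/config.toml"
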
	I0325 01:55:14.752662  449514 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 01:55:14.758738  449514 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 01:55:14.764818  449514 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 01:55:14.834079  449514 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 01:55:14.897100  449514 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 01:55:14.897174  449514 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 01:55:14.901028  449514 start.go:462] Will wait 60s for crictl version
	I0325 01:55:14.901085  449514 ssh_runner.go:195] Run: sudo crictl version
	I0325 01:55:14.923419  449514 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T01:55:14Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
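
The failed crictl call here is expected: containerd was restarted only a fraction of a second earlier (01:55:14.83), and its CRI plugin answers "server is not initialized yet" until startup completes, so retry.go backs off ~11s and the retried call at 01:55:25.97 below succeeds. An equivalent hand-rolled wait, assuming crictl is already pointed at the containerd socket via the /etc/crictl.yaml written above:

	# sketch: block until the CRI endpoint is serving
	until sudo crictl version >/dev/null 2>&1; do
	  sleep 1
	done
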
	I0325 01:55:12.439881  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:15.478625  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:12.945864  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:13.445997  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:13.945906  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:14.446100  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:14.946258  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:15.446416  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:15.946743  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:16.445790  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:16.946757  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:17.445866  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:14.951063  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:17.449838  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:17.945939  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:18.446030  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:18.946679  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:19.445913  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:19.946715  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:20.003561  442784 kubeadm.go:1020] duration metric: took 14.097833739s to wait for elevateKubeSystemPrivileges.
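
The burst of kubectl get sa default calls above is minikube waiting for the ServiceAccount controller to create the "default" ServiceAccount: pods cannot be created in a namespace until its default ServiceAccount exists, so this ~500ms polling loop gates the addon deployments that follow. The loop reduces to roughly:

	# sketch of the same readiness gate, run on the node
	until sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
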
	I0325 01:55:20.003591  442784 kubeadm.go:393] StartCluster complete in 31.384120335s
	I0325 01:55:20.003609  442784 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:55:20.003709  442784 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 01:55:20.004656  442784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:55:20.519233  442784 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "custom-weave-20220325014921-262786" rescaled to 1
	I0325 01:55:20.519302  442784 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 01:55:20.521500  442784 out.go:176] * Verifying Kubernetes components...
	I0325 01:55:20.519380  442784 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 01:55:20.521565  442784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 01:55:20.519622  442784 config.go:176] Loaded profile config "custom-weave-20220325014921-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:55:20.519397  442784 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0325 01:55:20.521700  442784 addons.go:65] Setting storage-provisioner=true in profile "custom-weave-20220325014921-262786"
	I0325 01:55:20.521715  442784 addons.go:153] Setting addon storage-provisioner=true in "custom-weave-20220325014921-262786"
	W0325 01:55:20.521720  442784 addons.go:165] addon storage-provisioner should already be in state true
	I0325 01:55:20.521747  442784 host.go:66] Checking if "custom-weave-20220325014921-262786" exists ...
	I0325 01:55:20.522052  442784 addons.go:65] Setting default-storageclass=true in profile "custom-weave-20220325014921-262786"
	I0325 01:55:20.522077  442784 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-weave-20220325014921-262786"
	I0325 01:55:20.522294  442784 cli_runner.go:133] Run: docker container inspect custom-weave-20220325014921-262786 --format={{.State.Status}}
	I0325 01:55:20.522407  442784 cli_runner.go:133] Run: docker container inspect custom-weave-20220325014921-262786 --format={{.State.Status}}
	I0325 01:55:20.564684  442784 addons.go:153] Setting addon default-storageclass=true in "custom-weave-20220325014921-262786"
	W0325 01:55:20.564711  442784 addons.go:165] addon default-storageclass should already be in state true
	I0325 01:55:20.564734  442784 host.go:66] Checking if "custom-weave-20220325014921-262786" exists ...
	I0325 01:55:20.565142  442784 cli_runner.go:133] Run: docker container inspect custom-weave-20220325014921-262786 --format={{.State.Status}}
	I0325 01:55:20.567523  442784 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 01:55:20.567643  442784 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 01:55:20.567660  442784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 01:55:20.567697  442784 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220325014921-262786
	I0325 01:55:20.601389  442784 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 01:55:20.601417  442784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 01:55:20.601486  442784 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220325014921-262786
	I0325 01:55:20.603466  442784 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0325 01:55:20.603729  442784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220325014921-262786/id_rsa Username:docker}
	I0325 01:55:20.604592  442784 node_ready.go:35] waiting up to 5m0s for node "custom-weave-20220325014921-262786" to be "Ready" ...
	I0325 01:55:20.607815  442784 node_ready.go:49] node "custom-weave-20220325014921-262786" has status "Ready":"True"
	I0325 01:55:20.607832  442784 node_ready.go:38] duration metric: took 3.210454ms waiting for node "custom-weave-20220325014921-262786" to be "Ready" ...
	I0325 01:55:20.607840  442784 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 01:55:20.616184  442784 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-qsk2c" in "kube-system" namespace to be "Ready" ...
	I0325 01:55:20.644527  442784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220325014921-262786/id_rsa Username:docker}
	I0325 01:55:20.708612  442784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 01:55:20.800646  442784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 01:55:20.994686  442784 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
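
The sed pipeline at 01:55:20.603 splices a hosts stanza into the CoreDNS Corefile just ahead of its forward directive and then kubectl-replaces the ConfigMap, which is what the "host record injected" line reports. Reconstructed from that sed script (indentation simplified), the resulting Corefile fragment is:

	hosts {
	   192.168.67.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf
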
	I0325 01:55:18.513302  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:21.545857  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:21.217641  442784 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0325 01:55:21.217669  442784 addons.go:417] enableAddons completed in 698.281954ms
	I0325 01:55:19.449885  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:21.450689  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:23.450820  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:25.971042  449514 ssh_runner.go:195] Run: sudo crictl version
	I0325 01:55:25.993240  449514 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 01:55:25.993303  449514 ssh_runner.go:195] Run: containerd --version
	I0325 01:55:26.013085  449514 ssh_runner.go:195] Run: containerd --version
	I0325 01:55:26.035628  449514 out.go:176] * Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	I0325 01:55:26.035700  449514 cli_runner.go:133] Run: docker network inspect kindnet-20220325014920-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 01:55:26.065833  449514 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0325 01:55:26.069174  449514 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
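
The grep/echo/cp dance above is deliberate: inside a Docker container /etc/hosts is bind-mounted, so sed -i (which renames a temp file over the target) would fail with "Device or resource busy". Minikube instead rebuilds the file under /tmp and cps it back over the same inode; the identical pattern recurs at 01:55:26.200 below for control-plane.minikube.internal. In isolation, the technique is:

	# sketch: rewrite a bind-mounted file without replacing its inode
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo $'192.168.58.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
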
	I0325 01:55:24.580164  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:22.627827  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:25.126871  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:27.128372  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:25.949935  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:27.951276  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:26.081213  449514 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 01:55:26.081289  449514 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 01:55:26.081340  449514 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 01:55:26.104340  449514 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 01:55:26.104359  449514 containerd.go:526] Images already preloaded, skipping extraction
	I0325 01:55:26.104399  449514 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 01:55:26.126228  449514 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 01:55:26.126252  449514 cache_images.go:84] Images are preloaded, skipping loading
	I0325 01:55:26.126297  449514 ssh_runner.go:195] Run: sudo crictl info
	I0325 01:55:26.148746  449514 cni.go:93] Creating CNI manager for "kindnet"
	I0325 01:55:26.148784  449514 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 01:55:26.148799  449514 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20220325014920-262786 NodeName:kindnet-20220325014920-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 01:55:26.148909  449514 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kindnet-20220325014920-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
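
The rendered kubeadm config above stacks four API documents separated by ---: InitConfiguration (node registration, API endpoint), ClusterConfiguration (component extraArgs, cert SANs, etcd layout), KubeletConfiguration (disk eviction disabled; the "0%!"(MISSING) values are the same Go logging artifact as before, standing in for "0%"), and KubeProxyConfiguration (conntrack tuning skipped). It lands on the node as /var/tmp/minikube/kubeadm.yaml (see 01:55:26.186 and 01:55:26.853 below) and feeds the kubeadm init at 01:55:26.866. A hedged way to exercise such a file before a real init:

	# sketch: validate the rendered config without mutating the node
	sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
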
	
	I0325 01:55:26.149005  449514 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kindnet-20220325014920-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.3 ClusterName:kindnet-20220325014920-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
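
The generated drop-in above uses the standard systemd override idiom: the bare ExecStart= line first clears the ExecStart inherited from the base kubelet.service (systemd rejects a second ExecStart on a simple service), and the following line supplies the full kubelet command with the containerd socket, --cni-conf-dir=/etc/cni/net.mk, and the node IP. On the node, the merged unit can be inspected with:

	# show the base unit plus the 10-kubeadm.conf override written below
	systemctl cat kubelet
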
	I0325 01:55:26.149052  449514 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
	I0325 01:55:26.155854  449514 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 01:55:26.155931  449514 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 01:55:26.162631  449514 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (574 bytes)
	I0325 01:55:26.174566  449514 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0325 01:55:26.186489  449514 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2058 bytes)
	I0325 01:55:26.197829  449514 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0325 01:55:26.200723  449514 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 01:55:26.209968  449514 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786 for IP: 192.168.58.2
	I0325 01:55:26.210075  449514 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 01:55:26.210119  449514 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 01:55:26.210173  449514 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.key
	I0325 01:55:26.210194  449514 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt with IP's: []
	I0325 01:55:26.417902  449514 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt ...
	I0325 01:55:26.417934  449514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: {Name:mke482e0d3615d15f8a0e1ec3f80257bfc97c4f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:55:26.418143  449514 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.key ...
	I0325 01:55:26.418162  449514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.key: {Name:mkff32edfcc4c2eb707360d40e7c1afa06b2c230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:55:26.418268  449514 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.key.cee25041
	I0325 01:55:26.418285  449514 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0325 01:55:26.493489  449514 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.crt.cee25041 ...
	I0325 01:55:26.493515  449514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.crt.cee25041: {Name:mk14b16498dbc281b3740ba71b4be7d62b8bbe5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:55:26.493692  449514 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.key.cee25041 ...
	I0325 01:55:26.493709  449514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.key.cee25041: {Name:mk3793378151f91dd4d340d5bd722f9a7a907533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:55:26.493822  449514 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.crt
	I0325 01:55:26.493884  449514 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.key
	I0325 01:55:26.493934  449514 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/proxy-client.key
	I0325 01:55:26.493947  449514 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/proxy-client.crt with IP's: []
	I0325 01:55:26.560393  449514 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/proxy-client.crt ...
	I0325 01:55:26.560416  449514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/proxy-client.crt: {Name:mk572877cd2b71a469f8f7fe55a734caed58088c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:55:26.560613  449514 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/proxy-client.key ...
	I0325 01:55:26.560632  449514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/proxy-client.key: {Name:mk8b789e0b8f5b106866cbdd103abe48fba916ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:55:26.560833  449514 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 01:55:26.560869  449514 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 01:55:26.560882  449514 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 01:55:26.560906  449514 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 01:55:26.560931  449514 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 01:55:26.560953  449514 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 01:55:26.560998  449514 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 01:55:26.561499  449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 01:55:26.578805  449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0325 01:55:26.596151  449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 01:55:26.612490  449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0325 01:55:26.629157  449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 01:55:26.645606  449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 01:55:26.661763  449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 01:55:26.677567  449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 01:55:26.693482  449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 01:55:26.709418  449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 01:55:26.725313  449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 01:55:26.741234  449514 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 01:55:26.753762  449514 ssh_runner.go:195] Run: openssl version
	I0325 01:55:26.758450  449514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 01:55:26.765176  449514 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 01:55:26.768085  449514 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 01:55:26.768131  449514 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 01:55:26.772576  449514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
	I0325 01:55:26.779292  449514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 01:55:26.786531  449514 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 01:55:26.789301  449514 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 01:55:26.789346  449514 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 01:55:26.793813  449514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0325 01:55:26.800578  449514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 01:55:26.807538  449514 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 01:55:26.810562  449514 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 01:55:26.810598  449514 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 01:55:26.815393  449514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
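
The 3ec20f2e.0, b5213941.0, and 51391683.0 names created above are OpenSSL subject-hash links: programs that use the system trust directory locate a CA by hashing its subject and opening <hash>.0, which is why each link is preceded by an openssl x509 -hash call. The derivation, sketched for one of the certs from this run:

	# sketch: derive the trust-directory link name for a CA certificate
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
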
	I0325 01:55:26.822494  449514 kubeadm.go:391] StartCluster: {Name:kindnet-20220325014920-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:kindnet-20220325014920-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 01:55:26.822582  449514 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 01:55:26.822646  449514 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 01:55:26.846017  449514 cri.go:87] found id: ""
	I0325 01:55:26.846098  449514 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 01:55:26.853346  449514 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 01:55:26.860032  449514 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 01:55:26.860085  449514 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 01:55:26.866535  449514 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0325 01:55:26.866603  449514 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
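
The long --ignore-preflight-errors list reflects the docker driver: checks such as SystemVerification, Swap, Mem, and the bridge-nf-call-iptables file test cannot be satisfied inside a container (see "ignoring SystemVerification for kubeadm because of docker driver" at 01:55:26.860), and the DirAvailable/FileAvailable skips make the init re-runnable over leftover manifests. A hedged way to rerun just the skipped checks on the node:

	# sketch: exercise only the preflight phase with the same skips
	sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Swap,Mem,SystemVerification
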
	I0325 01:55:27.612926  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:30.649664  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:29.128431  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:31.627540  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:30.450113  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:32.950401  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:33.683100  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:36.715325  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:34.127415  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:36.128099  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:35.450311  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:37.950414  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:39.751091  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:42.288097  449514 out.go:203]   - Generating certificates and keys ...
	I0325 01:55:42.291193  449514 out.go:203]   - Booting up control plane ...
	I0325 01:55:42.294032  449514 out.go:203]   - Configuring RBAC rules ...
	I0325 01:55:42.295734  449514 cni.go:93] Creating CNI manager for "kindnet"
	I0325 01:55:38.627308  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:40.627890  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:39.950851  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:42.450079  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:42.297463  449514 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 01:55:42.297522  449514 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 01:55:42.300986  449514 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 01:55:42.301002  449514 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 01:55:42.313726  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 01:55:43.040679  449514 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 01:55:43.040759  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:43.040761  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=kindnet-20220325014920-262786 minikube.k8s.io/updated_at=2022_03_25T01_55_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:43.047379  449514 ops.go:34] apiserver oom_adj: -16
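
The oom_adj probe confirms the API server is shielded from the OOM killer (-16 here). A sketch of the same read in Go, assuming a local node with a single apiserver process rather than the SSH hop minikube uses:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Mirrors: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
		pid, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(data))) // expect a negative value such as -16
	}
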
	I0325 01:55:43.109381  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:43.663589  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:44.163108  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:44.663068  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:45.163910  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:45.663089  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:42.785301  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:45.821589  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:43.127776  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:45.128206  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:44.949971  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:46.950388  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:46.163118  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:46.663747  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:47.163779  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:47.663912  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:48.163060  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:48.663802  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:49.163741  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:49.663849  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:50.163061  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:50.663061  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:48.854113  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:47.626880  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:49.627246  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:51.628191  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:49.450803  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:51.950005  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:51.163949  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:51.663251  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:52.163124  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:52.663755  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:53.163330  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:53.663117  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:54.163827  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:54.663886  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:55.164040  449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:55.230421  449514 kubeadm.go:1020] duration metric: took 12.189718321s to wait for elevateKubeSystemPrivileges.
	I0325 01:55:55.230459  449514 kubeadm.go:393] StartCluster complete in 28.407973168s
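
The half-second cadence of the kubectl get sa default runs above is a plain poll loop: elevateKubeSystemPrivileges retries until the default service account appears (12.19s in this run). A sketch of such a loop, with an assumed 2-minute deadline:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Same invocation as the repeated log lines above.
			err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.23.3/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default service account")
	}
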
	I0325 01:55:55.230497  449514 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:55:55.230587  449514 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 01:55:55.231954  449514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:55:55.747834  449514 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20220325014920-262786" rescaled to 1
	I0325 01:55:55.747892  449514 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 01:55:55.749840  449514 out.go:176] * Verifying Kubernetes components...
	I0325 01:55:55.747953  449514 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0325 01:55:55.747943  449514 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 01:55:55.748168  449514 config.go:176] Loaded profile config "kindnet-20220325014920-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:55:55.749922  449514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 01:55:55.749957  449514 addons.go:65] Setting storage-provisioner=true in profile "kindnet-20220325014920-262786"
	I0325 01:55:55.749991  449514 addons.go:153] Setting addon storage-provisioner=true in "kindnet-20220325014920-262786"
	W0325 01:55:55.750003  449514 addons.go:165] addon storage-provisioner should already be in state true
	I0325 01:55:55.750037  449514 host.go:66] Checking if "kindnet-20220325014920-262786" exists ...
	I0325 01:55:55.749959  449514 addons.go:65] Setting default-storageclass=true in profile "kindnet-20220325014920-262786"
	I0325 01:55:55.750175  449514 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20220325014920-262786"
	I0325 01:55:55.750481  449514 cli_runner.go:133] Run: docker container inspect kindnet-20220325014920-262786 --format={{.State.Status}}
	I0325 01:55:55.750645  449514 cli_runner.go:133] Run: docker container inspect kindnet-20220325014920-262786 --format={{.State.Status}}
	I0325 01:55:55.764770  449514 node_ready.go:35] waiting up to 5m0s for node "kindnet-20220325014920-262786" to be "Ready" ...
	I0325 01:55:55.796467  449514 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 01:55:55.796598  449514 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 01:55:55.796603  449514 addons.go:153] Setting addon default-storageclass=true in "kindnet-20220325014920-262786"
	W0325 01:55:55.796627  449514 addons.go:165] addon default-storageclass should already be in state true
	I0325 01:55:55.796658  449514 host.go:66] Checking if "kindnet-20220325014920-262786" exists ...
	I0325 01:55:55.796616  449514 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 01:55:55.796748  449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
	I0325 01:55:55.797084  449514 cli_runner.go:133] Run: docker container inspect kindnet-20220325014920-262786 --format={{.State.Status}}
	I0325 01:55:55.823354  449514 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0325 01:55:55.838360  449514 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 01:55:55.838391  449514 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 01:55:55.838451  449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
	I0325 01:55:55.841405  449514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49534 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa Username:docker}
	I0325 01:55:55.871252  449514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49534 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa Username:docker}
	I0325 01:55:56.041026  449514 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 01:55:56.089799  449514 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 01:55:56.108798  449514 start.go:777] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
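
The sed pipeline at 01:55:55.823354 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the network gateway. Judging from that sed expression, the stanza inserted just ahead of the Corefile's forward . /etc/resolv.conf line looks like:

	hosts {
	   192.168.58.1 host.minikube.internal
	   fallthrough
	}
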
	I0325 01:55:51.885844  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:54.919097  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:54.129214  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:56.129479  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:54.450376  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:56.949706  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:56.330095  449514 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0325 01:55:56.330118  449514 addons.go:417] enableAddons completed in 582.178132ms
	I0325 01:55:57.771713  449514 node_ready.go:58] node "kindnet-20220325014920-262786" has status "Ready":"False"
	I0325 01:55:59.772257  449514 node_ready.go:58] node "kindnet-20220325014920-262786" has status "Ready":"False"
	I0325 01:55:57.951168  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:00.986964  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:55:58.627187  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:00.627532  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:59.450129  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:01.450808  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:02.272163  449514 node_ready.go:58] node "kindnet-20220325014920-262786" has status "Ready":"False"
	I0325 01:56:04.272264  449514 node_ready.go:58] node "kindnet-20220325014920-262786" has status "Ready":"False"
	I0325 01:56:04.022375  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:02.627738  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:05.126638  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:07.126933  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:03.950079  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:06.450402  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:08.450482  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:06.771583  449514 node_ready.go:58] node "kindnet-20220325014920-262786" has status "Ready":"False"
	I0325 01:56:07.771772  449514 node_ready.go:49] node "kindnet-20220325014920-262786" has status "Ready":"True"
	I0325 01:56:07.771799  449514 node_ready.go:38] duration metric: took 12.006998068s waiting for node "kindnet-20220325014920-262786" to be "Ready" ...
	I0325 01:56:07.771807  449514 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 01:56:07.778025  449514 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-z9hnb" in "kube-system" namespace to be "Ready" ...
	I0325 01:56:09.786294  449514 pod_ready.go:102] pod "coredns-64897985d-z9hnb" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:07.054544  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:10.088923  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:10.786828  449514 pod_ready.go:92] pod "coredns-64897985d-z9hnb" in "kube-system" namespace has status "Ready":"True"
	I0325 01:56:10.786856  449514 pod_ready.go:81] duration metric: took 3.008800727s waiting for pod "coredns-64897985d-z9hnb" in "kube-system" namespace to be "Ready" ...
	I0325 01:56:10.786866  449514 pod_ready.go:78] waiting up to 5m0s for pod "etcd-kindnet-20220325014920-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:56:10.790887  449514 pod_ready.go:92] pod "etcd-kindnet-20220325014920-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 01:56:10.790902  449514 pod_ready.go:81] duration metric: took 4.031015ms waiting for pod "etcd-kindnet-20220325014920-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:56:10.790914  449514 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-kindnet-20220325014920-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:56:10.794991  449514 pod_ready.go:92] pod "kube-apiserver-kindnet-20220325014920-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 01:56:10.795010  449514 pod_ready.go:81] duration metric: took 4.089112ms waiting for pod "kube-apiserver-kindnet-20220325014920-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:56:10.795019  449514 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-kindnet-20220325014920-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:56:10.798928  449514 pod_ready.go:92] pod "kube-controller-manager-kindnet-20220325014920-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 01:56:10.798944  449514 pod_ready.go:81] duration metric: took 3.918878ms waiting for pod "kube-controller-manager-kindnet-20220325014920-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:56:10.798983  449514 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-td8lj" in "kube-system" namespace to be "Ready" ...
	I0325 01:56:10.802733  449514 pod_ready.go:92] pod "kube-proxy-td8lj" in "kube-system" namespace has status "Ready":"True"
	I0325 01:56:10.802765  449514 pod_ready.go:81] duration metric: took 3.776283ms waiting for pod "kube-proxy-td8lj" in "kube-system" namespace to be "Ready" ...
	I0325 01:56:10.802772  449514 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-kindnet-20220325014920-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:56:11.183881  449514 pod_ready.go:92] pod "kube-scheduler-kindnet-20220325014920-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 01:56:11.183905  449514 pod_ready.go:81] duration metric: took 381.113148ms waiting for pod "kube-scheduler-kindnet-20220325014920-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:56:11.183918  449514 pod_ready.go:38] duration metric: took 3.412101149s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 01:56:11.183943  449514 api_server.go:51] waiting for apiserver process to appear ...
	I0325 01:56:11.184009  449514 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 01:56:11.193218  449514 api_server.go:71] duration metric: took 15.44529936s to wait for apiserver process to appear ...
	I0325 01:56:11.193243  449514 api_server.go:87] waiting for apiserver healthz status ...
	I0325 01:56:11.193254  449514 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 01:56:11.197515  449514 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0325 01:56:11.198294  449514 api_server.go:140] control plane version: v1.23.3
	I0325 01:56:11.198319  449514 api_server.go:130] duration metric: took 5.06978ms to wait for apiserver health ...
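
The healthz probe is a plain HTTPS GET that must return 200/ok before the version check proceeds. A self-contained sketch (it skips TLS verification for brevity; the real client trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// InsecureSkipVerify is for illustration only.
		c := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := c.Get("https://192.168.58.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}
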
	I0325 01:56:11.198329  449514 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 01:56:11.387118  449514 system_pods.go:59] 8 kube-system pods found
	I0325 01:56:11.387159  449514 system_pods.go:61] "coredns-64897985d-z9hnb" [5c577c70-7ba0-42f7-84cc-29706381a927] Running
	I0325 01:56:11.387167  449514 system_pods.go:61] "etcd-kindnet-20220325014920-262786" [89463790-47bc-4b54-bfe0-764eff89c367] Running
	I0325 01:56:11.387173  449514 system_pods.go:61] "kindnet-sqq6l" [f4681712-732f-4c97-a171-96743c9634a6] Running
	I0325 01:56:11.387180  449514 system_pods.go:61] "kube-apiserver-kindnet-20220325014920-262786" [838f24ab-2d9c-4d11-b4e5-5f32f133c6f7] Running
	I0325 01:56:11.387186  449514 system_pods.go:61] "kube-controller-manager-kindnet-20220325014920-262786" [68d99255-d9ca-4e07-bdb8-6e8d650d33c0] Running
	I0325 01:56:11.387192  449514 system_pods.go:61] "kube-proxy-td8lj" [47ac9435-9af3-4083-b483-959467fae74b] Running
	I0325 01:56:11.387199  449514 system_pods.go:61] "kube-scheduler-kindnet-20220325014920-262786" [ca1490d3-de38-48f7-94e1-06d6e9631bec] Running
	I0325 01:56:11.387210  449514 system_pods.go:61] "storage-provisioner" [42e3fbb5-5d56-42d0-bced-81ef5bdabd94] Running
	I0325 01:56:11.387219  449514 system_pods.go:74] duration metric: took 188.884495ms to wait for pod list to return data ...
	I0325 01:56:11.387231  449514 default_sa.go:34] waiting for default service account to be created ...
	I0325 01:56:11.584178  449514 default_sa.go:45] found service account: "default"
	I0325 01:56:11.584205  449514 default_sa.go:55] duration metric: took 196.964681ms for default service account to be created ...
	I0325 01:56:11.584213  449514 system_pods.go:116] waiting for k8s-apps to be running ...
	I0325 01:56:11.786817  449514 system_pods.go:86] 8 kube-system pods found
	I0325 01:56:11.786845  449514 system_pods.go:89] "coredns-64897985d-z9hnb" [5c577c70-7ba0-42f7-84cc-29706381a927] Running
	I0325 01:56:11.786850  449514 system_pods.go:89] "etcd-kindnet-20220325014920-262786" [89463790-47bc-4b54-bfe0-764eff89c367] Running
	I0325 01:56:11.786855  449514 system_pods.go:89] "kindnet-sqq6l" [f4681712-732f-4c97-a171-96743c9634a6] Running
	I0325 01:56:11.786859  449514 system_pods.go:89] "kube-apiserver-kindnet-20220325014920-262786" [838f24ab-2d9c-4d11-b4e5-5f32f133c6f7] Running
	I0325 01:56:11.786864  449514 system_pods.go:89] "kube-controller-manager-kindnet-20220325014920-262786" [68d99255-d9ca-4e07-bdb8-6e8d650d33c0] Running
	I0325 01:56:11.786868  449514 system_pods.go:89] "kube-proxy-td8lj" [47ac9435-9af3-4083-b483-959467fae74b] Running
	I0325 01:56:11.786872  449514 system_pods.go:89] "kube-scheduler-kindnet-20220325014920-262786" [ca1490d3-de38-48f7-94e1-06d6e9631bec] Running
	I0325 01:56:11.786875  449514 system_pods.go:89] "storage-provisioner" [42e3fbb5-5d56-42d0-bced-81ef5bdabd94] Running
	I0325 01:56:11.786880  449514 system_pods.go:126] duration metric: took 202.662306ms to wait for k8s-apps to be running ...
	I0325 01:56:11.786887  449514 system_svc.go:44] waiting for kubelet service to be running ....
	I0325 01:56:11.786926  449514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 01:56:11.796444  449514 system_svc.go:56] duration metric: took 9.549447ms WaitForService to wait for kubelet.
	I0325 01:56:11.796465  449514 kubeadm.go:548] duration metric: took 16.048553309s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0325 01:56:11.796487  449514 node_conditions.go:102] verifying NodePressure condition ...
	I0325 01:56:11.985209  449514 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 01:56:11.985238  449514 node_conditions.go:123] node cpu capacity is 8
	I0325 01:56:11.985255  449514 node_conditions.go:105] duration metric: took 188.763227ms to run NodePressure ...
	I0325 01:56:11.985267  449514 start.go:213] waiting for startup goroutines ...
	I0325 01:56:12.021944  449514 start.go:499] kubectl: 1.23.5, cluster: 1.23.3 (minor skew: 0)
	I0325 01:56:12.024417  449514 out.go:176] * Done! kubectl is now configured to use "kindnet-20220325014920-262786" cluster and "default" namespace by default
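
The kindnet profile (PID 449514) is now fully up; the pod_ready lines that continue below belong to the other concurrent profiles. A quick manual check against the finished cluster would be kubectl --context kindnet-20220325014920-262786 get pods -A, using the context name minikube just configured.
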
	I0325 01:56:09.127472  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:11.627092  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:10.950537  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:13.450494  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:13.121694  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:16.158628  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:13.627515  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:16.127205  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:15.950800  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:18.450140  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:19.193066  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:18.128126  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:20.128197  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:20.950489  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:23.450799  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:22.229496  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:25.263135  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:22.627521  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:24.628218  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:27.128007  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:25.950728  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:28.450548  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:28.299080  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:31.335036  431164 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0325 01:56:31.335082  431164 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
	I0325 01:56:31.335570  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	W0325 01:56:31.369049  431164 delete.go:135] deletehost failed: Docker machine "old-k8s-version-20220325015306-262786" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0325 01:56:31.369136  431164 cli_runner.go:133] Run: docker container inspect -f {{.Id}} old-k8s-version-20220325015306-262786
	I0325 01:56:31.404692  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:31.436643  431164 cli_runner.go:133] Run: docker exec --privileged -t old-k8s-version-20220325015306-262786 /bin/bash -c "sudo init 0"
	W0325 01:56:31.469236  431164 cli_runner.go:180] docker exec --privileged -t old-k8s-version-20220325015306-262786 /bin/bash -c "sudo init 0" returned with exit code 1
	I0325 01:56:31.469271  431164 oci.go:659] error shutdown old-k8s-version-20220325015306-262786: docker exec --privileged -t old-k8s-version-20220325015306-262786 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218 is not running
	I0325 01:56:29.626998  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:32.127383  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:32.470272  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:32.503561  431164 oci.go:673] temporary error: container old-k8s-version-20220325015306-262786 status is  but expect it to be exited
	I0325 01:56:32.503590  431164 oci.go:679] Successfully shutdown container old-k8s-version-20220325015306-262786
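
Shutdown of the half-created old-k8s-version container is best-effort: try sudo init 0 inside it (which fails here because the container is not running), then poll docker's view of the state until it is "exited" — or empty, when the container is already gone, which oci.go accepts as shut down. A sketch of that poll:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		name := "old-k8s-version-20220325015306-262786"
		for i := 0; i < 60; i++ {
			out, _ := exec.Command("docker", "container", "inspect", name,
				"--format", "{{.State.Status}}").Output()
			status := strings.TrimSpace(string(out))
			// Empty status means the container no longer exists; treat as down.
			if status == "exited" || status == "" {
				fmt.Printf("container is down (status: %q)\n", status)
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("gave up waiting for shutdown")
	}
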
	I0325 01:56:32.503641  431164 cli_runner.go:133] Run: docker rm -f -v old-k8s-version-20220325015306-262786
	I0325 01:56:32.540810  431164 cli_runner.go:133] Run: docker container inspect -f {{.Id}} old-k8s-version-20220325015306-262786
	W0325 01:56:32.570903  431164 cli_runner.go:180] docker container inspect -f {{.Id}} old-k8s-version-20220325015306-262786 returned with exit code 1
	I0325 01:56:32.571005  431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0325 01:56:32.601633  431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0325 01:56:32.601695  431164 network_create.go:254] running [docker network inspect old-k8s-version-20220325015306-262786] to gather additional debugging logs...
	I0325 01:56:32.601719  431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786
	W0325 01:56:32.632633  431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 returned with exit code 1
	I0325 01:56:32.632663  431164 network_create.go:257] error running [docker network inspect old-k8s-version-20220325015306-262786]: docker network inspect old-k8s-version-20220325015306-262786: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220325015306-262786
	I0325 01:56:32.632678  431164 network_create.go:259] output of [docker network inspect old-k8s-version-20220325015306-262786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220325015306-262786
	
	** /stderr **
	W0325 01:56:32.632818  431164 delete.go:139] delete failed (probably ok) <nil>
	I0325 01:56:32.632831  431164 fix.go:120] Sleeping 1 second for extra luck!
	I0325 01:56:33.633777  431164 start.go:127] createHost starting for "" (driver="docker")
	I0325 01:56:30.950428  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:33.449469  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:33.636953  431164 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0325 01:56:33.637111  431164 start.go:161] libmachine.API.Create for "old-k8s-version-20220325015306-262786" (driver="docker")
	I0325 01:56:33.637158  431164 client.go:168] LocalClient.Create starting
	I0325 01:56:33.637270  431164 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0325 01:56:33.637315  431164 main.go:130] libmachine: Decoding PEM data...
	I0325 01:56:33.637341  431164 main.go:130] libmachine: Parsing certificate...
	I0325 01:56:33.637420  431164 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0325 01:56:33.637448  431164 main.go:130] libmachine: Decoding PEM data...
	I0325 01:56:33.637471  431164 main.go:130] libmachine: Parsing certificate...
	I0325 01:56:33.637805  431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0325 01:56:33.670584  431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0325 01:56:33.670681  431164 network_create.go:254] running [docker network inspect old-k8s-version-20220325015306-262786] to gather additional debugging logs...
	I0325 01:56:33.670699  431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786
	W0325 01:56:33.700818  431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 returned with exit code 1
	I0325 01:56:33.700851  431164 network_create.go:257] error running [docker network inspect old-k8s-version-20220325015306-262786]: docker network inspect old-k8s-version-20220325015306-262786: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220325015306-262786
	I0325 01:56:33.700871  431164 network_create.go:259] output of [docker network inspect old-k8s-version-20220325015306-262786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220325015306-262786
	
	** /stderr **
	I0325 01:56:33.700917  431164 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 01:56:33.731365  431164 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-fcb21d43dbbf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:db:45:ae:c5}}
	I0325 01:56:33.732243  431164 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-a040cc4bab62 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d0:f2:08:b6}}
	I0325 01:56:33.733015  431164 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-12bda0d2312e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:de:32:64:a8}}
	I0325 01:56:33.733812  431164 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc00013e8e8 192.168.76.0:0xc000702388] misses:0}
	I0325 01:56:33.733853  431164 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0325 01:56:33.733877  431164 network_create.go:106] attempt to create docker network old-k8s-version-20220325015306-262786 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0325 01:56:33.733929  431164 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220325015306-262786
	I0325 01:56:33.801121  431164 network_create.go:90] docker network old-k8s-version-20220325015306-262786 192.168.76.0/24 created
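
The three skipped subnets and the chosen one differ by 9 in the third octet (.49, .58, .67, .76), so the free-subnet search can be read as stepping through /24 candidates until one is unclaimed. A rough sketch of that idea, shelling out to docker instead of inspecting host interfaces as minikube's network.go actually does:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Crude occupancy check: collect the subnets of all docker networks.
		out, err := exec.Command("/bin/bash", "-c",
			`docker network inspect $(docker network ls -q) --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'`).Output()
		if err != nil {
			panic(err)
		}
		taken := map[string]bool{}
		for _, l := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			taken[strings.TrimSpace(l)] = true
		}
		// Candidates step by 9 from 192.168.49.0/24, matching the log.
		for third := 49; third <= 247; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[subnet] {
				fmt.Println("first free private subnet:", subnet)
				return
			}
		}
		fmt.Println("no free candidate subnet")
	}
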
	I0325 01:56:33.801156  431164 kic.go:106] calculated static IP "192.168.76.2" for the "old-k8s-version-20220325015306-262786" container
	I0325 01:56:33.801207  431164 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0325 01:56:33.833969  431164 cli_runner.go:133] Run: docker volume create old-k8s-version-20220325015306-262786 --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true
	I0325 01:56:33.863735  431164 oci.go:102] Successfully created a docker volume old-k8s-version-20220325015306-262786
	I0325 01:56:33.863800  431164 cli_runner.go:133] Run: docker run --rm --name old-k8s-version-20220325015306-262786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --entrypoint /usr/bin/test -v old-k8s-version-20220325015306-262786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0325 01:56:34.361286  431164 oci.go:106] Successfully prepared a docker volume old-k8s-version-20220325015306-262786
	I0325 01:56:34.361350  431164 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0325 01:56:34.361371  431164 kic.go:179] Starting extracting preloaded images to volume ...
	I0325 01:56:34.361435  431164 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220325015306-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0325 01:56:34.128040  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:36.627385  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:35.450252  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:37.949875  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:39.128737  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:41.626936  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:39.950734  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:42.451036  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:43.174328  431164 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220325015306-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (8.812845537s)
	I0325 01:56:43.174371  431164 kic.go:188] duration metric: took 8.812995 seconds to extract preloaded images to volume
	W0325 01:56:43.174413  431164 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0325 01:56:43.174420  431164 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0325 01:56:43.174472  431164 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0325 01:56:43.265519  431164 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220325015306-262786 --name old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --network old-k8s-version-20220325015306-262786 --ip 192.168.76.2 --volume old-k8s-version-20220325015306-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0325 01:56:43.664728  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Running}}
	I0325 01:56:43.700561  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:43.732786  431164 cli_runner.go:133] Run: docker exec old-k8s-version-20220325015306-262786 stat /var/lib/dpkg/alternatives/iptables
	I0325 01:56:43.800760  431164 oci.go:281] the created container "old-k8s-version-20220325015306-262786" has a running status.
	I0325 01:56:43.800796  431164 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa...
	I0325 01:56:43.897798  431164 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0325 01:56:44.005992  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:44.040565  431164 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0325 01:56:44.040590  431164 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220325015306-262786 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0325 01:56:44.141276  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:56:44.181329  431164 machine.go:88] provisioning docker machine ...
	I0325 01:56:44.181386  431164 ubuntu.go:169] provisioning hostname "old-k8s-version-20220325015306-262786"
	I0325 01:56:44.181456  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:44.218999  431164 main.go:130] libmachine: Using SSH client type: native
	I0325 01:56:44.219333  431164 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49539 <nil> <nil>}
	I0325 01:56:44.219364  431164 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220325015306-262786 && echo "old-k8s-version-20220325015306-262786" | sudo tee /etc/hostname
	I0325 01:56:44.346895  431164 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220325015306-262786
	
	I0325 01:56:44.347002  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
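
Because the container publishes 127.0.0.1::22 (an ephemeral host port), every SSH step first resolves the real port with the Go template shown on these cli_runner lines. The same lookup, standalone:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask docker which host port was bound to the container's 22/tcp.
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"old-k8s-version-20220325015306-262786").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 49539
	}
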
	I0325 01:56:44.378982  431164 main.go:130] libmachine: Using SSH client type: native
	I0325 01:56:44.379158  431164 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49539 <nil> <nil>}
	I0325 01:56:44.379177  431164 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220325015306-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220325015306-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220325015306-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 01:56:44.499114  431164 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 01:56:44.499153  431164 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 01:56:44.499174  431164 ubuntu.go:177] setting up certificates
	I0325 01:56:44.499184  431164 provision.go:83] configureAuth start
	I0325 01:56:44.499239  431164 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
	I0325 01:56:44.532553  431164 provision.go:138] copyHostCerts
	I0325 01:56:44.532637  431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 01:56:44.532651  431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 01:56:44.532750  431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 01:56:44.532836  431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 01:56:44.532855  431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 01:56:44.532882  431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 01:56:44.532930  431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 01:56:44.532938  431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 01:56:44.532957  431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 01:56:44.532998  431164 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220325015306-262786 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220325015306-262786]
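
The server cert is issued for every SAN the node may be reached by: its static IP, loopback, and the hostname variants. A compact, self-signed approximation with crypto/x509 (illustrative and with error handling elided; minikube signs with the CA from certs/ca.pem instead of self-signing):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-20220325015306-262786"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN list copied from the provision.go line above.
			DNSNames:    []string{"localhost", "minikube", "old-k8s-version-20220325015306-262786"},
			IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
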
	I0325 01:56:44.716034  431164 provision.go:172] copyRemoteCerts
	I0325 01:56:44.716095  431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 01:56:44.716131  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:44.750262  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:56:44.842652  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0325 01:56:44.860534  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0325 01:56:44.877456  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 01:56:44.894710  431164 provision.go:86] duration metric: configureAuth took 395.50834ms
	I0325 01:56:44.894744  431164 ubuntu.go:193] setting minikube options for container-runtime
	I0325 01:56:44.894925  431164 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0325 01:56:44.894941  431164 machine.go:91] provisioned docker machine in 713.577559ms
	I0325 01:56:44.894947  431164 client.go:171] LocalClient.Create took 11.257778857s
	I0325 01:56:44.894990  431164 start.go:169] duration metric: libmachine.API.Create for "old-k8s-version-20220325015306-262786" took 11.257879213s
	I0325 01:56:44.895011  431164 start.go:302] post-start starting for "old-k8s-version-20220325015306-262786" (driver="docker")
	I0325 01:56:44.895022  431164 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 01:56:44.895080  431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 01:56:44.895130  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:44.927429  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:56:45.014679  431164 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 01:56:45.017487  431164 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 01:56:45.017516  431164 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 01:56:45.017525  431164 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 01:56:45.017530  431164 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 01:56:45.017538  431164 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 01:56:45.017604  431164 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 01:56:45.017669  431164 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 01:56:45.017744  431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 01:56:45.024070  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 01:56:45.041483  431164 start.go:305] post-start completed in 146.454729ms
	I0325 01:56:45.041798  431164 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
	I0325 01:56:45.076182  431164 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/config.json ...
	I0325 01:56:45.076420  431164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 01:56:45.076458  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:45.108209  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:56:45.195204  431164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 01:56:45.198866  431164 start.go:130] duration metric: createHost completed in 11.565060546s
	I0325 01:56:45.198964  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	W0325 01:56:45.231974  431164 fix.go:134] unexpected machine state, will restart: <nil>
	I0325 01:56:45.232009  431164 machine.go:88] provisioning docker machine ...
	I0325 01:56:45.232033  431164 ubuntu.go:169] provisioning hostname "old-k8s-version-20220325015306-262786"
	I0325 01:56:45.232086  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:45.262455  431164 main.go:130] libmachine: Using SSH client type: native
	I0325 01:56:45.262621  431164 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49539 <nil> <nil>}
	I0325 01:56:45.262636  431164 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220325015306-262786 && echo "old-k8s-version-20220325015306-262786" | sudo tee /etc/hostname
	I0325 01:56:45.386554  431164 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220325015306-262786
	
	I0325 01:56:45.386637  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:45.419901  431164 main.go:130] libmachine: Using SSH client type: native
	I0325 01:56:45.420066  431164 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49539 <nil> <nil>}
	I0325 01:56:45.420098  431164 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220325015306-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220325015306-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220325015306-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
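	(The script above keeps /etc/hosts idempotent: it acts only when no line already carries the hostname, either rewriting an existing 127.0.1.1 entry in place with sed or appending one with tee -a. The empty output below suggests the entry was already in place from the earlier provisioning pass.)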
	I0325 01:56:45.542421  431164 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 01:56:45.542450  431164 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 01:56:45.542464  431164 ubuntu.go:177] setting up certificates
	I0325 01:56:45.542474  431164 provision.go:83] configureAuth start
	I0325 01:56:45.542517  431164 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
	I0325 01:56:45.575074  431164 provision.go:138] copyHostCerts
	I0325 01:56:45.575139  431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 01:56:45.575151  431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 01:56:45.575209  431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 01:56:45.575301  431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 01:56:45.575311  431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 01:56:45.575333  431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 01:56:45.575380  431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 01:56:45.575388  431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 01:56:45.575407  431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 01:56:45.575453  431164 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220325015306-262786 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220325015306-262786]
	I0325 01:56:45.699927  431164 provision.go:172] copyRemoteCerts
	I0325 01:56:45.699978  431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 01:56:45.700008  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:45.732608  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:56:46.059471  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 01:56:46.077602  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0325 01:56:46.094741  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0325 01:56:46.111752  431164 provision.go:86] duration metric: configureAuth took 569.266891ms
	I0325 01:56:46.111780  431164 ubuntu.go:193] setting minikube options for container-runtime
	I0325 01:56:46.111953  431164 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0325 01:56:46.111967  431164 machine.go:91] provisioned docker machine in 879.950952ms
	I0325 01:56:46.111977  431164 start.go:302] post-start starting for "old-k8s-version-20220325015306-262786" (driver="docker")
	I0325 01:56:46.111985  431164 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 01:56:46.112037  431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 01:56:46.112083  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:46.146009  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:56:46.238610  431164 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 01:56:46.241357  431164 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 01:56:46.241383  431164 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 01:56:46.241391  431164 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 01:56:46.241399  431164 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 01:56:46.241413  431164 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 01:56:46.241465  431164 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 01:56:46.241560  431164 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 01:56:46.241650  431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 01:56:46.248459  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 01:56:46.265464  431164 start.go:305] post-start completed in 153.469791ms
	I0325 01:56:46.265532  431164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 01:56:46.265573  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:46.297032  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:56:46.382984  431164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 01:56:46.387252  431164 fix.go:57] fixHost completed within 3m17.71088257s
	I0325 01:56:46.387290  431164 start.go:81] releasing machines lock for "old-k8s-version-20220325015306-262786", held for 3m17.710952005s
	I0325 01:56:46.387387  431164 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
	I0325 01:56:46.430623  431164 ssh_runner.go:195] Run: sudo service crio stop
	I0325 01:56:46.430668  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:46.430668  431164 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 01:56:46.430720  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:56:46.467539  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:56:46.469867  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:56:43.627468  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:46.128274  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:44.950967  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:47.450049  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:46.901923  431164 openrc.go:165] stop output: 
	I0325 01:56:46.901990  431164 ssh_runner.go:195] Run: sudo service crio status
	I0325 01:56:46.918929  431164 docker.go:183] disabling docker service ...
	I0325 01:56:46.918994  431164 ssh_runner.go:195] Run: sudo service docker.socket stop
	I0325 01:56:47.285757  431164 openrc.go:165] stop output: 
	** stderr ** 
	Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
	
	** /stderr **
	E0325 01:56:47.285792  431164 docker.go:186] "Failed to stop" err=<
		sudo service docker.socket stop: Process exited with status 5
		stdout:
		
		stderr:
		Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
	 > service="docker.socket"
	I0325 01:56:47.285838  431164 ssh_runner.go:195] Run: sudo service docker.service stop
	I0325 01:56:47.649755  431164 openrc.go:165] stop output: 
	** stderr ** 
	Failed to stop docker.service.service: Unit docker.service.service not loaded.
	
	** /stderr **
	E0325 01:56:47.649784  431164 docker.go:189] "Failed to stop" err=<
		sudo service docker.service stop: Process exited with status 5
		stdout:
		
		stderr:
		Failed to stop docker.service.service: Unit docker.service.service not loaded.
	 > service="docker.service"
	W0325 01:56:47.649796  431164 cruntime.go:283] disable failed: sudo service docker.service stop: Process exited with status 5
	stdout:
	
	stderr:
	Failed to stop docker.service.service: Unit docker.service.service not loaded.
	I0325 01:56:47.649838  431164 ssh_runner.go:195] Run: sudo service docker status
	W0325 01:56:47.664778  431164 containerd.go:244] disableOthers: Docker is still active
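	(Both stop attempts above fail for a naming reason rather than a real one: the unit name passed in already ends in ".socket"/".service" and the service wrapper appends ".service" again — hence "docker.socket.service" and "docker.service.service" — so no matching unit is found. As the status check here confirms, Docker is in fact still running; minikube notes it and proceeds with containerd regardless.)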
	I0325 01:56:47.664901  431164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 01:56:47.676728  431164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
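	(For readability: the %!s(MISSING) is log-formatting noise from an unescaped %s, and the base64 payload decodes to containerd's v2 config.toml. The fields most relevant to this run, decoded from the blob above, include:
	  version = 2
	  root = "/var/lib/containerd"
	  state = "/run/containerd"
	  [plugins."io.containerd.grpc.v1.cri"]
	    sandbox_image = "k8s.gcr.io/pause:3.1"
	    [plugins."io.containerd.grpc.v1.cri".containerd]
	      snapshotter = "overlayfs"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	        runtime_type = "io.containerd.runc.v2"
	        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	          SystemdCgroup = false
	    [plugins."io.containerd.grpc.v1.cri".cni]
	      bin_dir = "/opt/cni/bin"
	      conf_dir = "/etc/cni/net.mk"
	Note that conf_dir matches the kubelet.cni-conf-dir=/etc/cni/net.mk flag surfaced earlier in stdout.)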
	I0325 01:56:47.689398  431164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 01:56:47.695491  431164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 01:56:47.701670  431164 ssh_runner.go:195] Run: sudo service containerd restart
	I0325 01:56:47.775876  431164 openrc.go:152] restart output: 
	I0325 01:56:47.775911  431164 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 01:56:47.775957  431164 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 01:56:47.780036  431164 start.go:462] Will wait 60s for crictl version
	I0325 01:56:47.780095  431164 ssh_runner.go:195] Run: sudo crictl version
	I0325 01:56:47.808499  431164 retry.go:31] will retry after 8.009118606s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T01:56:47Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
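	(The retry is simply waiting for containerd's CRI server to finish initializing after the restart; the backoff is equivalent to a small poll loop, sketched here assuming crictl is on the node's PATH:
	  for _ in $(seq 1 12); do sudo crictl version && break; sleep 5; done
	)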
	I0325 01:56:48.627787  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:51.128134  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:49.450281  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:51.950064  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:55.819167  431164 ssh_runner.go:195] Run: sudo crictl version
	I0325 01:56:55.842809  431164 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 01:56:55.842867  431164 ssh_runner.go:195] Run: containerd --version
	I0325 01:56:55.862493  431164 ssh_runner.go:195] Run: containerd --version
	I0325 01:56:55.885291  431164 out.go:176] * Preparing Kubernetes v1.16.0 on containerd 1.5.10 ...
	I0325 01:56:55.885389  431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 01:56:55.918381  431164 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0325 01:56:55.921728  431164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 01:56:55.933134  431164 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 01:56:55.933231  431164 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0325 01:56:55.933303  431164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 01:56:55.955768  431164 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 01:56:55.955788  431164 containerd.go:526] Images already preloaded, skipping extraction
	I0325 01:56:55.955828  431164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 01:56:55.979329  431164 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 01:56:55.979348  431164 cache_images.go:84] Images are preloaded, skipping loading
	I0325 01:56:55.979386  431164 ssh_runner.go:195] Run: sudo crictl info
	I0325 01:56:56.002748  431164 cni.go:93] Creating CNI manager for ""
	I0325 01:56:56.002768  431164 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 01:56:56.002779  431164 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 01:56:56.002792  431164 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220325015306-262786 NodeName:old-k8s-version-20220325015306-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 01:56:56.002974  431164 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20220325015306-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220325015306-262786
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
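	(The dump above is four stacked kubeadm documents — InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration; the "0%!"(MISSING) runs are the same log-formatting noise as before, the rendered values evidently being plain "0%". A config like this can be sanity-checked before it ever touches the node, for example with a dry run using the staged binary — illustrative, not part of this run:
	  sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	)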
	
	I0325 01:56:56.003083  431164 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20220325015306-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
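	(The empty ExecStart= line followed by a populated one is the standard systemd drop-in idiom: the first clears the command inherited from the base kubelet.service, the second installs the flag set shown. Because this kicbase image is also driven through openrc-style service wrappers, minikube additionally ships /etc/init.d/kubelet and an openrc restart wrapper, copied in just below.)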
	I0325 01:56:56.003141  431164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0325 01:56:56.009691  431164 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 01:56:56.009827  431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
	I0325 01:56:56.016464  431164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (582 bytes)
	I0325 01:56:56.028607  431164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0325 01:56:56.041034  431164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0325 01:56:56.052949  431164 ssh_runner.go:362] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
	I0325 01:56:56.064655  431164 ssh_runner.go:362] scp memory --> /etc/init.d/kubelet (839 bytes)
	I0325 01:56:56.077971  431164 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0325 01:56:56.080686  431164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
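	(The { grep -v ...; echo ...; } > /tmp/h.$$ followed by sudo cp pattern — used for host.minikube.internal above as well — rewrites /etc/hosts without replacing the file itself; inside a Docker container /etc/hosts is a bind mount, so copying contents over it works where a rename would fail, and a plain redirect could not run under sudo anyway.)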
	I0325 01:56:56.089291  431164 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786 for IP: 192.168.76.2
	I0325 01:56:56.089415  431164 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 01:56:56.089479  431164 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 01:56:56.089550  431164 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.key
	I0325 01:56:56.089574  431164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.crt with IP's: []
	I0325 01:56:56.173943  431164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.crt ...
	I0325 01:56:56.173977  431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.crt: {Name:mk49efef0712da8d212d4d9821e0f44d60c00474 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:56:56.174212  431164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.key ...
	I0325 01:56:56.174231  431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.key: {Name:mk717fd0b3391f00b7d69817a759d1a2ba6569e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:56:56.174386  431164 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25
	I0325 01:56:56.174407  431164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0325 01:56:56.553488  431164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt.31bdca25 ...
	I0325 01:56:56.553520  431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt.31bdca25: {Name:mk0db50f453f850e6693f5f3251d591297fe24c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:56:56.553723  431164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25 ...
	I0325 01:56:56.553738  431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25: {Name:mk44b3f12e50b4c043237e17ee319a130c7e6799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:56:56.553849  431164 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt
	I0325 01:56:56.553904  431164 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key
	I0325 01:56:56.553946  431164 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key
	I0325 01:56:56.553962  431164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt with IP's: []
	I0325 01:56:56.634118  431164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt ...
	I0325 01:56:56.634144  431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt: {Name:mk41a988659c1306ddd1bb6feb42c4fcbdf737c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:56:56.634328  431164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key ...
	I0325 01:56:56.634387  431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key: {Name:mk496346cb1866d19fd00f75f3dc225361dc4fcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:56:56.634593  431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 01:56:56.634634  431164 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 01:56:56.634643  431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 01:56:56.634663  431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 01:56:56.634688  431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 01:56:56.634714  431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 01:56:56.634755  431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
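	(The doubled /home/jenkins/.../certs/home/jenkins/... prefixes above appear to be a cosmetic quirk of the cert-scanning log line rather than real paths; the copies that follow read the expected single-prefix locations under .minikube.)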
	I0325 01:56:56.635301  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 01:56:56.653204  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0325 01:56:56.669615  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 01:56:56.686091  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0325 01:56:56.702278  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 01:56:56.718732  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 01:56:56.734704  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 01:56:56.751950  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 01:56:56.768370  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 01:56:56.785599  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 01:56:56.802704  431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 01:56:56.818636  431164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 01:56:56.830434  431164 ssh_runner.go:195] Run: openssl version
	I0325 01:56:56.834834  431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 01:56:56.841688  431164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 01:56:56.844759  431164 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 01:56:56.844799  431164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 01:56:56.849420  431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0325 01:56:56.856216  431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 01:56:56.863401  431164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 01:56:56.866302  431164 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 01:56:56.866341  431164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 01:56:56.871090  431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
	I0325 01:56:56.878141  431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 01:56:56.885043  431164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 01:56:56.887974  431164 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 01:56:56.888019  431164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 01:56:56.892629  431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
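	(Each test -L / ln -fs pair above implements OpenSSL's hashed-directory lookup, which is what c_rehash automates: the link name is the subject-name hash printed by openssl x509 -hash plus a ".0" suffix, so the minikubeCA hash b5213941 yields /etc/ssl/certs/b5213941.0, and likewise 51391683.0 and 3ec20f2e.0 for the two jenkins certs.)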
	I0325 01:56:56.899573  431164 kubeadm.go:391] StartCluster: {Name:old-k8s-version-20220325015306-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 01:56:56.899669  431164 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 01:56:56.899700  431164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 01:56:56.924510  431164 cri.go:87] found id: ""
	I0325 01:56:56.924564  431164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 01:56:56.967274  431164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 01:56:56.974042  431164 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 01:56:56.974100  431164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 01:56:56.980509  431164 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0325 01:56:56.980549  431164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
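	(The long --ignore-preflight-errors list reflects the docker driver: the "node" is a container, so swap, port, and system checks that assume a dedicated host — including the SystemVerification check waived at 01:56:56 above — do not apply and are skipped wholesale.)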
	I0325 01:56:53.628216  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:56.127805  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:54.450144  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:56.450569  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:58.450825  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:57.342628  431164 out.go:203]   - Generating certificates and keys ...
	I0325 01:57:00.421358  431164 out.go:203]   - Booting up control plane ...
	I0325 01:56:58.128581  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:00.627978  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:00.950520  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:03.450640  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:03.128282  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:05.627062  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:05.450918  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:07.950107  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:10.462463  431164 out.go:203]   - Configuring RBAC rules ...
	I0325 01:57:10.884078  431164 cni.go:93] Creating CNI manager for ""
	I0325 01:57:10.884101  431164 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 01:57:10.885886  431164 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 01:57:10.885957  431164 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 01:57:10.889349  431164 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0325 01:57:10.889369  431164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 01:57:10.902215  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 01:57:11.219931  431164 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 01:57:11.220013  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:11.220072  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=old-k8s-version-20220325015306-262786 minikube.k8s.io/updated_at=2022_03_25T01_57_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:11.227208  431164 ops.go:34] apiserver oom_adj: -16
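	(The oom_adj value -16, read from /proc above, shows the API server process running with a negative OOM score adjustment, i.e. protected from the kernel OOM killer relative to the default of 0.)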
	I0325 01:57:11.318580  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
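	(The repeated kubectl get sa default calls that follow, interleaved with output from the two other tests still polling their pods, wait for the default ServiceAccount to exist: the controller manager must create it before any pod can be admitted in the namespace.)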
	I0325 01:57:07.627148  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:09.627800  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:11.628985  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:09.950750  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:11.951314  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:11.897565  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:12.397150  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:12.897044  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:13.397714  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:13.897135  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:14.396784  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:14.897509  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:15.397532  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:15.897241  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:16.397418  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:14.127368  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:16.128349  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:14.450849  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:16.451359  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:16.897298  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:17.397490  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:17.896851  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:18.396958  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:18.897528  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:19.397449  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:19.896818  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:20.396950  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:20.897730  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:21.397699  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:18.627452  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:21.127596  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:18.950702  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:20.950861  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:22.951062  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:21.897770  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:22.397129  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:22.897777  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:23.396809  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:23.897374  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:24.396808  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:24.897374  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:25.397510  431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:57:25.465074  431164 kubeadm.go:1020] duration metric: took 14.245126743s to wait for elevateKubeSystemPrivileges.
	I0325 01:57:25.465105  431164 kubeadm.go:393] StartCluster complete in 28.565542464s
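
The long run of `kubectl get sa default` calls above is a ~500ms poll: elevateKubeSystemPrivileges cannot finish until the token controller has created the "default" ServiceAccount, which took 14.2s here. A client-go sketch of an equivalent wait, using the kubeconfig path from this run (the interval matches the timestamp cadence above; the 2m budget is illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Poll every 500ms until the token controller has created the
	// "default" ServiceAccount in the default namespace.
	err = wait.Poll(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
		_, getErr := client.CoreV1().ServiceAccounts("default").
			Get(context.TODO(), "default", metav1.GetOptions{})
		return getErr == nil, nil
	})
	fmt.Println("default service account present:", err == nil)
}
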
	I0325 01:57:25.465127  431164 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:57:25.465222  431164 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 01:57:25.466826  431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:57:25.982566  431164 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220325015306-262786" rescaled to 1
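
The rescale logged by kapi.go drops coredns to one replica, so a single-node cluster does not wait on a second DNS pod. A client-go sketch of the same scale-down via the scale subresource (a sketch under the assumption of the kubeconfig path from this run, not minikube's actual implementation):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Fetch the current scale subresource, set replicas to 1, write it back.
	scale, err := client.AppsV1().Deployments("kube-system").
		GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	_, err = client.AppsV1().Deployments("kube-system").
		UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{})
	fmt.Println("rescaled coredns to 1:", err == nil)
}
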
	I0325 01:57:25.982642  431164 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 01:57:25.985735  431164 out.go:176] * Verifying Kubernetes components...
	I0325 01:57:25.982729  431164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 01:57:25.985818  431164 ssh_runner.go:195] Run: sudo service kubelet status
	I0325 01:57:25.982734  431164 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0325 01:57:25.982930  431164 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0325 01:57:25.985917  431164 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220325015306-262786"
	I0325 01:57:25.985938  431164 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220325015306-262786"
	W0325 01:57:25.985944  431164 addons.go:165] addon storage-provisioner should already be in state true
	I0325 01:57:25.985974  431164 host.go:66] Checking if "old-k8s-version-20220325015306-262786" exists ...
	I0325 01:57:25.987026  431164 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220325015306-262786"
	I0325 01:57:25.987059  431164 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220325015306-262786"
	I0325 01:57:25.987464  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:57:25.987734  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:57:26.043330  431164 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 01:57:26.041809  431164 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220325015306-262786"
	W0325 01:57:26.043448  431164 addons.go:165] addon default-storageclass should already be in state true
	I0325 01:57:26.043461  431164 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 01:57:26.043473  431164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 01:57:26.043499  431164 host.go:66] Checking if "old-k8s-version-20220325015306-262786" exists ...
	I0325 01:57:26.043528  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:57:26.043990  431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 01:57:26.079480  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:57:26.080003  431164 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 01:57:26.080025  431164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 01:57:26.080072  431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 01:57:26.123901  431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 01:57:26.130675  431164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0325 01:57:26.132207  431164 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220325015306-262786" to be "Ready" ...
	I0325 01:57:26.203910  431164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 01:57:26.305985  431164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 01:57:26.701311  431164 start.go:777] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
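
The /bin/bash pipeline above edits the CoreDNS ConfigMap in place: sed inserts a hosts block in front of the `forward . /etc/resolv.conf` directive so host.minikube.internal resolves to the host gateway (192.168.76.1 in this run), then kubectl replace writes it back. The same insertion expressed as a Go string transform (a sketch of the effect, not the shipped sed pipeline; the example Corefile is abbreviated):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block immediately before the
// "forward . /etc/resolv.conf" line so host.minikube.internal resolves
// to the host gateway, with fallthrough for everything else.
func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	marker := "        forward . /etc/resolv.conf"
	return strings.Replace(corefile, marker, hosts+marker, 1)
}

func main() {
	corefile := ".:53 {\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
}
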
	I0325 01:57:23.627677  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:25.629078  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:25.451476  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:27.950005  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:26.884863  431164 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0325 01:57:26.884915  431164 addons.go:417] enableAddons completed in 902.209882ms
	I0325 01:57:28.137240  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:30.137382  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:28.127454  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:30.127857  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:30.450420  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:32.951294  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:32.137902  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:34.636994  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:36.637231  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:32.627061  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:34.627281  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:36.627716  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:35.450505  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:37.950444  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:38.637618  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:41.138151  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:38.628044  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:41.128288  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:40.450506  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:42.450985  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:43.637420  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:46.137000  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:43.627437  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:45.629027  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:44.949672  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:46.950297  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:48.137252  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:50.137524  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:48.127262  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:50.627821  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:49.450175  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:51.450356  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:52.638010  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:55.137979  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:52.628171  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:54.629330  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:57.127108  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:53.950613  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:56.449946  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:58.450110  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:57.637645  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:00.137151  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:57:59.127720  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:01.627485  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:00.451216  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:02.950770  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:02.137531  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:04.137755  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:06.637823  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:04.127661  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:06.127944  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:05.450556  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:07.451055  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:09.137247  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:11.137649  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:08.627986  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:11.127221  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:09.949891  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:11.950918  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:13.138175  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:15.637967  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:13.127386  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:15.628308  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:14.450791  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:16.949899  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:18.137346  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:20.137621  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:18.127661  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:20.627067  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:19.450727  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:21.950126  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:22.138039  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:24.637505  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:26.637944  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:22.627669  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:25.127702  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:24.450063  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:26.450260  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:28.450830  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:28.638663  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:31.137778  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:27.627188  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:29.627870  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:32.127319  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:30.950663  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:33.450641  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:33.137957  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:35.637360  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:34.127627  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:36.128027  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:35.950344  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:38.450663  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:37.637456  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:40.137522  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:38.128157  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:40.627116  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:40.949547  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:42.950881  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:42.637830  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:44.638149  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:42.627366  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:44.627901  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:47.127092  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:45.450029  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:47.951426  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:47.137013  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:49.137465  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:51.137831  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:49.127972  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:51.627645  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:50.450644  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:52.949935  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:53.138061  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:55.637301  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:54.128948  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:56.627956  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:54.453764  440243 pod_ready.go:81] duration metric: took 4m0.014071871s waiting for pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace to be "Ready" ...
	E0325 01:58:54.453795  440243 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0325 01:58:54.453817  440243 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-srh8z" in "kube-system" namespace to be "Ready" ...
	I0325 01:58:56.465394  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:58.466164  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:57.637937  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:00.137993  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:58:59.131509  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:01.626847  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:00.466246  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:02.466356  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:02.138041  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:04.138262  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:06.637907  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:03.627991  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:06.128421  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:04.466551  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:06.965390  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:09.139879  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:11.637442  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:08.627165  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:10.628040  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:08.965530  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:10.966031  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:12.966329  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:13.637538  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:15.639122  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:13.127579  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:15.127811  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:15.466507  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:17.966052  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:18.137261  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:20.137829  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:17.628001  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:20.127516  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:20.631571  442784 pod_ready.go:81] duration metric: took 4m0.015353412s waiting for pod "coredns-64897985d-qsk2c" in "kube-system" namespace to be "Ready" ...
	E0325 01:59:20.631596  442784 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0325 01:59:20.631606  442784 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-x2v4t" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:20.633133  442784 pod_ready.go:97] error getting pod "coredns-64897985d-x2v4t" in "kube-system" namespace (skipping!): pods "coredns-64897985d-x2v4t" not found
	I0325 01:59:20.633152  442784 pod_ready.go:81] duration metric: took 1.540051ms waiting for pod "coredns-64897985d-x2v4t" in "kube-system" namespace to be "Ready" ...
	E0325 01:59:20.633160  442784 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-x2v4t" in "kube-system" namespace (skipping!): pods "coredns-64897985d-x2v4t" not found
	I0325 01:59:20.633166  442784 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:20.637747  442784 pod_ready.go:92] pod "etcd-custom-weave-20220325014921-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 01:59:20.637768  442784 pod_ready.go:81] duration metric: took 4.596316ms waiting for pod "etcd-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:20.637780  442784 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:20.642175  442784 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220325014921-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 01:59:20.642191  442784 pod_ready.go:81] duration metric: took 4.404746ms waiting for pod "kube-apiserver-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:20.642200  442784 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:20.825032  442784 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220325014921-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 01:59:20.825054  442784 pod_ready.go:81] duration metric: took 182.848289ms waiting for pod "kube-controller-manager-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:20.825064  442784 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-zv4v5" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:21.225297  442784 pod_ready.go:92] pod "kube-proxy-zv4v5" in "kube-system" namespace has status "Ready":"True"
	I0325 01:59:21.225318  442784 pod_ready.go:81] duration metric: took 400.248182ms waiting for pod "kube-proxy-zv4v5" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:21.225330  442784 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:21.625682  442784 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220325014921-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 01:59:21.625709  442784 pod_ready.go:81] duration metric: took 400.371185ms waiting for pod "kube-scheduler-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:21.625721  442784 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-fm6bn" in "kube-system" namespace to be "Ready" ...
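
Worth noting in the block above: coredns-64897985d-x2v4t had already been deleted (the deployment was rescaled to one replica earlier), and pod_ready treats NotFound as "skipping!" rather than a failure, so the wait moves on to the next pod. A client-go sketch of that readiness-check-with-skip (podReady is a hypothetical helper; apierrors.IsNotFound is the real apimachinery check):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True. NotFound is
// reported as skip=true: the pod was removed out from under the wait, so
// the caller should move on instead of failing the whole check.
func podReady(client kubernetes.Interface, ns, name string) (ready, skip bool, err error) {
	pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, true, nil
	}
	if err != nil {
		return false, false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, false, nil
		}
	}
	return false, false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(podReady(client, "kube-system", "coredns-64897985d-x2v4t"))
}
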
	I0325 01:59:19.966529  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:22.465926  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:22.637466  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:24.637948  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:24.032172  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:26.531791  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:24.466219  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:26.466280  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:27.137486  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:29.137528  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:31.137566  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:29.031481  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:31.531040  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:28.965577  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:31.465336  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:33.138065  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:35.637535  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:33.532410  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:36.031682  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:33.965866  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:36.465716  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:38.465892  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:37.637991  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:39.638114  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:38.530826  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:40.531743  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:40.966108  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:43.465526  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:42.137688  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:44.637241  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:46.637686  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:43.031806  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:45.531386  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:45.465729  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:47.967747  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:49.137625  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:51.638236  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:47.531588  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:49.531656  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:52.031683  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:50.466203  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:52.966050  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:54.137670  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:56.138392  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:54.032187  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:56.531093  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:55.466240  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:57.466490  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:58.637751  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:00.638089  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 01:59:58.531417  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:00.531966  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:59.966109  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:02.465830  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:03.137541  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:05.637552  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:03.031649  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:05.531282  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:04.965956  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:06.968356  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:08.137145  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:10.137534  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:07.531455  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:09.531699  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:12.032938  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:09.466106  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:11.965385  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:12.637732  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:15.138150  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:14.531694  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:16.531949  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:13.966907  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:16.466246  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:18.466374  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:17.637995  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:20.137994  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:19.031660  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:21.531699  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:20.966050  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:23.466019  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:22.637195  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:24.638276  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:24.032516  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:26.531380  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:25.466373  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:27.966783  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:27.137477  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:29.138059  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:31.138114  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:29.030968  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:31.031214  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:30.466003  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:32.966003  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:33.637955  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:35.638305  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:33.531559  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:36.031302  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:35.466050  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:37.966004  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:38.137342  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:40.138018  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:38.031823  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:40.531380  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:40.465841  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:42.966235  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:42.638060  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:45.137181  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:43.031453  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:45.031822  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:45.465711  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:47.965776  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:47.137290  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:49.137908  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:51.638340  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:47.531831  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:50.032302  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:49.966476  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:52.466039  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:54.137713  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:56.637016  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:52.531720  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:55.031662  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:54.966085  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:56.966359  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:58.637267  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:00.637464  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:00:57.531581  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:00.030994  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:02.031443  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:58.967286  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:01.466350  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:03.466445  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:02.638041  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:05.137294  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:04.031865  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:06.031960  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:05.966165  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:07.966194  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:07.137350  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:09.137969  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:11.638005  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:08.032116  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:10.532240  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:10.466003  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:12.466535  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:14.137955  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:16.637434  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:13.031864  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:15.531085  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:14.966152  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:17.466878  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:18.637978  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:21.137203  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:17.532033  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:20.031731  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:22.034273  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:19.966129  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:21.966568  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:23.137475  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:25.137628  431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:01:26.139331  431164 node_ready.go:38] duration metric: took 4m0.007092133s waiting for node "old-k8s-version-20220325015306-262786" to be "Ready" ...
	I0325 02:01:26.141382  431164 out.go:176] 
	W0325 02:01:26.141510  431164 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:01:26.141527  431164 out.go:241] * 
	W0325 02:01:26.142250  431164 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
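
For reference, the GUEST_START failure above is the node-Ready wait running out of budget: node_ready.go gave up after 4m0.007s with the node's Ready condition still False. A client-go sketch of such a wait, using the node name and kubeconfig from this run (the 2s interval and 6m budget are illustrative, not minikube's exact cadence):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	name := "old-k8s-version-20220325015306-262786"
	// Keep polling until the Node's Ready condition is True or the budget runs out.
	waitErr := wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API hiccups as retryable
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("node ready:", waitErr == nil)
}

In this run the condition never turned True, which is consistent with the container status table below: the kindnet-cni container had already exited once and been restarted while every other control-plane container kept running.
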
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	079cd3357f1fd       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   0b7c839dde6fb
	8e7808702d5d6       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   0b7c839dde6fb
	f84fedf62f62a       c21b0c7400f98       4 minutes ago        Running             kube-proxy                0                   8329903e5a1d1
	2a8a16a4c5ab0       b305571ca60a5       4 minutes ago        Running             kube-apiserver            0                   6257dca791a92
	0dcaa5ddf16d7       06a629a7e51cd       4 minutes ago        Running             kube-controller-manager   0                   4f6ca772f8d74
	0f2defa775551       301ddc62b80b1       4 minutes ago        Running             kube-scheduler            0                   64b5b98ae89a8
	1366a173f44ad       b2756210eeabf       4 minutes ago        Running             etcd                      0                   f07b14711b6c4
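Note: reading the table above, every control-plane container is on attempt 0 and Running, but kindnet-cni (the CNI agent) exited once and is on attempt 1, which lines up with the CNI-not-initialized condition reported later. Assuming crictl is on the node (it ships in minikube's base image), the full container list, including exited containers, can be pulled with:

	minikube ssh -p old-k8s-version-20220325015306-262786 "sudo crictl ps -a"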
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-03-25 01:56:43 UTC, end at Fri 2022-03-25 02:01:27 UTC. --
	Mar 25 01:57:01 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:01.788397018Z" level=info msg="StartContainer for \"0dcaa5ddf16d74bb0f7b672cf9c1f93a9049cfc9e9fa01287dfc31c913129a95\" returns successfully"
	Mar 25 01:57:01 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:01.788552920Z" level=info msg="StartContainer for \"2a8a16a4c5ab06cec61505599bfcd94a42a8de336bbe343006809032ae98bee0\" returns successfully"
	Mar 25 01:57:25 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:25.717807531Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Mar 25 01:57:25 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:25.957585408Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-wxllf,Uid:8df13659-eaff-4414-b783-5e971e2dae50,Namespace:kube-system,Attempt:0,}"
	Mar 25 01:57:25 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:25.957585630Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-rx7hj,Uid:bf35a126-09fa-4db9-9aa4-2cb811bf4595,Namespace:kube-system,Attempt:0,}"
	Mar 25 01:57:25 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:25.982307374Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8329903e5a1d1c800be8e2125d67bf84ec79a4aa9d91a6c8ba109f8ad1951fe0 pid=2399
	Mar 25 01:57:25 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:25.985207180Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb pid=2414
	Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.097668598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wxllf,Uid:8df13659-eaff-4414-b783-5e971e2dae50,Namespace:kube-system,Attempt:0,} returns sandbox id \"8329903e5a1d1c800be8e2125d67bf84ec79a4aa9d91a6c8ba109f8ad1951fe0\""
	Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.101054577Z" level=info msg="CreateContainer within sandbox \"8329903e5a1d1c800be8e2125d67bf84ec79a4aa9d91a6c8ba109f8ad1951fe0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.194056671Z" level=info msg="CreateContainer within sandbox \"8329903e5a1d1c800be8e2125d67bf84ec79a4aa9d91a6c8ba109f8ad1951fe0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e\""
	Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.194625839Z" level=info msg="StartContainer for \"f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e\""
	Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.207575966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-rx7hj,Uid:bf35a126-09fa-4db9-9aa4-2cb811bf4595,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\""
	Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.210667921Z" level=info msg="CreateContainer within sandbox \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.305123248Z" level=info msg="CreateContainer within sandbox \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"8e7808702d5d6d554f961b1120eef82835bf0c35a13a50bc3c3deae13e17b0b7\""
	Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.306084994Z" level=info msg="StartContainer for \"8e7808702d5d6d554f961b1120eef82835bf0c35a13a50bc3c3deae13e17b0b7\""
	Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.489647000Z" level=info msg="StartContainer for \"f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e\" returns successfully"
	Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.690812432Z" level=info msg="StartContainer for \"8e7808702d5d6d554f961b1120eef82835bf0c35a13a50bc3c3deae13e17b0b7\" returns successfully"
	Mar 25 02:00:06 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:00:06.930815730Z" level=info msg="shim disconnected" id=8e7808702d5d6d554f961b1120eef82835bf0c35a13a50bc3c3deae13e17b0b7
	Mar 25 02:00:06 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:00:06.930882983Z" level=warning msg="cleaning up after shim disconnected" id=8e7808702d5d6d554f961b1120eef82835bf0c35a13a50bc3c3deae13e17b0b7 namespace=k8s.io
	Mar 25 02:00:06 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:00:06.930895328Z" level=info msg="cleaning up dead shim"
	Mar 25 02:00:06 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:00:06.940936267Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:00:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3317\n"
	Mar 25 02:00:07 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:00:07.016635529Z" level=info msg="CreateContainer within sandbox \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Mar 25 02:00:07 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:00:07.031284315Z" level=info msg="CreateContainer within sandbox \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"079cd3357f1fdb712691e0e2faf42ffa65a9f250899b730661a824d22e9c22e3\""
	Mar 25 02:00:07 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:00:07.031721233Z" level=info msg="StartContainer for \"079cd3357f1fdb712691e0e2faf42ffa65a9f250899b730661a824d22e9c22e3\""
	Mar 25 02:00:07 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:00:07.104353921Z" level=info msg="StartContainer for \"079cd3357f1fdb712691e0e2faf42ffa65a9f250899b730661a824d22e9c22e3\" returns successfully"
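Note: the "shim disconnected" entries at 02:00:06 are containerd tearing down the first kindnet-cni container after it died, roughly two and a half minutes after start; the restart at 02:00:07 matches attempt 1 in the container-status table. To see why it died, one option (crictl accepts a unique ID prefix, so the short ID from the table should resolve) is:

	minikube ssh -p old-k8s-version-20220325015306-262786 "sudo crictl logs 8e7808702d5d6"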
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220325015306-262786
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220325015306-262786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95
	                    minikube.k8s.io/name=old-k8s-version-20220325015306-262786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_25T01_57_11_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Mar 2022 01:57:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Mar 2022 02:01:06 +0000   Fri, 25 Mar 2022 01:57:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Mar 2022 02:01:06 +0000   Fri, 25 Mar 2022 01:57:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Mar 2022 02:01:06 +0000   Fri, 25 Mar 2022 01:57:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 25 Mar 2022 02:01:06 +0000   Fri, 25 Mar 2022 01:57:02 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-20220325015306-262786
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	System Info:
	 Machine ID:                 140a143b31184b58be947b52a01fff83
	 System UUID:                586019ba-8c2c-445d-9550-f545f1f4ef4d
	 Boot ID:                    63fce5d9-a30b-498a-bfed-7dd46d23a363
	 Kernel Version:             5.13.0-1021-gcp
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.5.10
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20220325015306-262786                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                kindnet-rx7hj                                                    100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                kube-apiserver-old-k8s-version-20220325015306-262786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                kube-controller-manager-old-k8s-version-20220325015306-262786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m16s
	  kube-system                kube-proxy-wxllf                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                kube-scheduler-old-k8s-version-20220325015306-262786             100m (1%)    0 (0%)      0 (0%)           0 (0%)         3m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From                                               Message
	  ----    ------                   ----                   ----                                               -------
	  Normal  NodeAllocatableEnforced  4m27s                  kubelet, old-k8s-version-20220325015306-262786     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m26s (x8 over 4m27s)  kubelet, old-k8s-version-20220325015306-262786     Node old-k8s-version-20220325015306-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x8 over 4m27s)  kubelet, old-k8s-version-20220325015306-262786     Node old-k8s-version-20220325015306-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x7 over 4m27s)  kubelet, old-k8s-version-20220325015306-262786     Node old-k8s-version-20220325015306-262786 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m1s                   kube-proxy, old-k8s-version-20220325015306-262786  Starting kube-proxy.
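Note: the only failing condition above is Ready=False with "cni plugin not initialized": kubelet never found a CNI config it could load, even though kindnet was scheduled and started. As a sketch of how to check for a dropped config, assuming the default config directory (this cluster may point kubelet's cni-conf-dir elsewhere):

	minikube ssh -p old-k8s-version-20220325015306-262786 "ls -l /etc/cni/net.d"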
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: 02 42 d0 f2 08 b6 02 42 c0 a8 3a 02 08 00
	[  +7.006669] IPv4: martian source 10.85.0.21 from 10.85.0.21, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 e5 8c f4 3b 15 08 06
	[Mar25 02:00] IPv4: martian source 10.85.0.22 from 10.85.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e 3a e8 93 0f f0 08 06
	[ +11.785527] IPv4: martian source 10.85.0.23 from 10.85.0.23, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 52 04 02 2c 26 08 06
	[  +8.370268] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev br-a040cc4bab62
	[  +0.000006] ll header: 00000000: 02 42 d0 f2 08 b6 02 42 c0 a8 3a 02 08 00
	[  +4.995582] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev br-a040cc4bab62
	[  +0.000006] ll header: 00000000: 02 42 d0 f2 08 b6 02 42 c0 a8 3a 02 08 00
	[  +1.989183] IPv4: martian source 10.85.0.24 from 10.85.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 76 31 e2 ca 4f 08 06
	[  +3.010521] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev br-a040cc4bab62
	[  +0.000006] ll header: 00000000: 02 42 d0 f2 08 b6 02 42 c0 a8 3a 02 08 00
	[ +12.328647] IPv4: martian source 10.85.0.25 from 10.85.0.25, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ae 44 d1 e4 c8 08 06
	[Mar25 02:01] IPv4: martian source 10.85.0.26 from 10.85.0.26, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a f5 b0 71 56 83 08 06
	[ +12.211857] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev br-a040cc4bab62
	[  +0.000007] ll header: 00000000: 02 42 d0 f2 08 b6 02 42 c0 a8 3a 02 08 00
	[  +4.294695] IPv4: martian source 10.85.0.27 from 10.85.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 99 aa 90 20 5f 08 06
	[  +0.701780] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev br-a040cc4bab62
	[  +0.000005] ll header: 00000000: 02 42 d0 f2 08 b6 02 42 c0 a8 3a 02 08 00
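Note: the martian-source messages above are the host kernel logging packets that arrive on one docker bridge with a source address belonging to another; with several minikube profiles running in parallel on this agent, this is likely background noise rather than part of the failure. Whether such packets get logged at all is controlled by a sysctl:

	sysctl net.ipv4.conf.all.log_martians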
	
	* 
	* ==> etcd [1366a173f44ada0abf1e4f2c5003b1d9df1c0ee0a950928cdf3a5f3f7048faaa] <==
	* 2022-03-25 01:57:01.789418 I | etcdserver: initial cluster = old-k8s-version-20220325015306-262786=https://192.168.76.2:2380
	2022-03-25 01:57:01.795636 I | etcdserver: starting member ea7e25599daad906 in cluster 6f20f2c4b2fb5f8a
	2022-03-25 01:57:01.795668 I | raft: ea7e25599daad906 became follower at term 0
	2022-03-25 01:57:01.795679 I | raft: newRaft ea7e25599daad906 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2022-03-25 01:57:01.795684 I | raft: ea7e25599daad906 became follower at term 1
	2022-03-25 01:57:01.803372 W | auth: simple token is not cryptographically signed
	2022-03-25 01:57:01.806268 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2022-03-25 01:57:01.807413 I | etcdserver: ea7e25599daad906 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2022-03-25 01:57:01.807883 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2022-03-25 01:57:01.808954 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2022-03-25 01:57:01.809140 I | embed: listening for metrics on http://127.0.0.1:2381
	2022-03-25 01:57:01.809206 I | embed: listening for metrics on http://192.168.76.2:2381
	2022-03-25 01:57:02.596023 I | raft: ea7e25599daad906 is starting a new election at term 1
	2022-03-25 01:57:02.596060 I | raft: ea7e25599daad906 became candidate at term 2
	2022-03-25 01:57:02.596077 I | raft: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	2022-03-25 01:57:02.596090 I | raft: ea7e25599daad906 became leader at term 2
	2022-03-25 01:57:02.596097 I | raft: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2022-03-25 01:57:02.596295 I | etcdserver: setting up the initial cluster version to 3.3
	2022-03-25 01:57:02.597359 N | etcdserver/membership: set the initial cluster version to 3.3
	2022-03-25 01:57:02.597406 I | etcdserver/api: enabled capabilities for version 3.3
	2022-03-25 01:57:02.597440 I | etcdserver: published {Name:old-k8s-version-20220325015306-262786 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2022-03-25 01:57:02.597617 I | embed: ready to serve client requests
	2022-03-25 01:57:02.597747 I | embed: ready to serve client requests
	2022-03-25 01:57:02.600650 I | embed: serving client requests on 192.168.76.2:2379
	2022-03-25 01:57:02.601990 I | embed: serving client requests on 127.0.0.1:2379
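Note: etcd looks healthy here: the single member wins the election at term 2 and starts serving on both client addresses, so storage is unlikely to be the problem. One way to confirm from outside, via the apiserver's etcd health-check endpoint, is:

	kubectl --context old-k8s-version-20220325015306-262786 get --raw /healthz/etcd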
	
	* 
	* ==> kernel <==
	*  02:01:27 up  4:39,  0 users,  load average: 1.41, 1.82, 1.95
	Linux old-k8s-version-20220325015306-262786 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [2a8a16a4c5ab06cec61505599bfcd94a42a8de336bbe343006809032ae98bee0] <==
	* I0325 01:57:05.741087       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0325 01:57:05.742225       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.76.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0325 01:57:05.747229       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
	I0325 01:57:05.747261       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0325 01:57:05.883908       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0325 01:57:05.883932       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0325 01:57:05.884126       1 cache.go:39] Caches are synced for autoregister controller
	I0325 01:57:05.884201       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0325 01:57:06.739679       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0325 01:57:06.739704       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0325 01:57:06.739717       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0325 01:57:06.743177       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I0325 01:57:06.747597       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0325 01:57:06.747620       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0325 01:57:07.493498       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0325 01:57:08.520754       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0325 01:57:08.800880       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0325 01:57:09.114170       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0325 01:57:09.114813       1 controller.go:606] quota admission added evaluator for: endpoints
	I0325 01:57:09.966541       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0325 01:57:10.500104       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0325 01:57:10.871924       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0325 01:57:25.143684       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0325 01:57:25.153906       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0325 01:57:25.619240       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [0dcaa5ddf16d74bb0f7b672cf9c1f93a9049cfc9e9fa01287dfc31c913129a95] <==
	* I0325 01:57:25.519444       1 shared_informer.go:204] Caches are synced for disruption 
	I0325 01:57:25.519471       1 disruption.go:341] Sending events to api server.
	I0325 01:57:25.564979       1 shared_informer.go:204] Caches are synced for persistent volume 
	I0325 01:57:25.567532       1 shared_informer.go:204] Caches are synced for node 
	I0325 01:57:25.567556       1 range_allocator.go:172] Starting range CIDR allocator
	I0325 01:57:25.567570       1 shared_informer.go:197] Waiting for caches to sync for cidrallocator
	I0325 01:57:25.569098       1 shared_informer.go:204] Caches are synced for HPA 
	I0325 01:57:25.569516       1 shared_informer.go:204] Caches are synced for TTL 
	I0325 01:57:25.615069       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0325 01:57:25.619293       1 shared_informer.go:204] Caches are synced for taint 
	I0325 01:57:25.619399       1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone: 
	W0325 01:57:25.619533       1 node_lifecycle_controller.go:903] Missing timestamp for Node old-k8s-version-20220325015306-262786. Assuming now as a timestamp.
	I0325 01:57:25.619601       1 node_lifecycle_controller.go:1058] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0325 01:57:25.619813       1 taint_manager.go:186] Starting NoExecuteTaintManager
	I0325 01:57:25.619960       1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-20220325015306-262786", UID:"f6951a5c-6edc-46f8-beec-3c90a8b9581c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node old-k8s-version-20220325015306-262786 event: Registered Node old-k8s-version-20220325015306-262786 in Controller
	I0325 01:57:25.627002       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"9ebcce20-95c8-46a7-994a-18f1bc7bd92e", APIVersion:"apps/v1", ResourceVersion:"232", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-rx7hj
	I0325 01:57:25.629138       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"6d02422f-16d5-4e4d-a5bf-93392a263b1e", APIVersion:"apps/v1", ResourceVersion:"221", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-wxllf
	E0325 01:57:25.636892       1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"6d02422f-16d5-4e4d-a5bf-93392a263b1e", ResourceVersion:"221", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63783770230, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b60f60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Names
pace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeS
ource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001c3a040), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001b60f80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001b60fa0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.A
zureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001b60fe0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMo
de)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001685180), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0016811f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServic
eAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001643860), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy
{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0002ceee8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001681238)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0325 01:57:25.667564       1 shared_informer.go:204] Caches are synced for resource quota 
	I0325 01:57:25.667705       1 shared_informer.go:204] Caches are synced for cidrallocator 
	I0325 01:57:25.669937       1 shared_informer.go:204] Caches are synced for resource quota 
	I0325 01:57:25.670463       1 range_allocator.go:359] Set node old-k8s-version-20220325015306-262786 PodCIDR to [10.244.0.0/24]
	I0325 01:57:25.679094       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0325 01:57:25.722642       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0325 01:57:25.722667       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
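Note: the long daemon_controller.go error above is an optimistic-concurrency conflict ("the object has been modified"); the controller retries on the next sync, so it is noisy but routine. To confirm both daemonsets settled afterwards, something like:

	kubectl --context old-k8s-version-20220325015306-262786 -n kube-system get daemonset kindnet kube-proxy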
	
	* 
	* ==> kube-proxy [f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e] <==
	* W0325 01:57:26.609517       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0325 01:57:26.688448       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0325 01:57:26.688492       1 server_others.go:149] Using iptables Proxier.
	I0325 01:57:26.688881       1 server.go:529] Version: v1.16.0
	I0325 01:57:26.690169       1 config.go:131] Starting endpoints config controller
	I0325 01:57:26.690202       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0325 01:57:26.690377       1 config.go:313] Starting service config controller
	I0325 01:57:26.690393       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0325 01:57:26.790460       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0325 01:57:26.790538       1 shared_informer.go:204] Caches are synced for service config 
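Note: kube-proxy itself comes up cleanly; the proxy-mode="" warning just means the mode field in its config was empty, so it fell back to iptables, which is the default anyway. The effective config lives in a configmap and can be inspected with:

	kubectl --context old-k8s-version-20220325015306-262786 -n kube-system get configmap kube-proxy -o yaml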
	
	* 
	* ==> kube-scheduler [0f2defa775551729a53f4b102a79f5f1c8e3853bbb12ba362f6555860b09d99a] <==
	* I0325 01:57:05.800810       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0325 01:57:05.892456       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0325 01:57:05.892758       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0325 01:57:05.892875       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0325 01:57:05.892975       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 01:57:05.893150       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 01:57:05.893319       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 01:57:05.893573       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 01:57:05.894058       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 01:57:05.894470       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 01:57:05.894601       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0325 01:57:05.894681       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 01:57:06.894818       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0325 01:57:06.895872       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0325 01:57:06.897095       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0325 01:57:06.898221       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 01:57:06.899310       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 01:57:06.900400       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 01:57:06.901503       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 01:57:06.902607       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 01:57:06.903724       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 01:57:06.904742       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0325 01:57:06.905998       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 01:57:25.156410       1 factory.go:585] pod is already present in the activeQ
	E0325 01:57:25.162943       1 factory.go:585] pod is already present in the activeQ
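Note: the burst of "forbidden" errors at 01:57:05-06 is the usual startup race: the scheduler starts listing resources before kubeadm finishes bootstrapping RBAC, and the errors stop once the bindings exist. Permissions can be spot-checked after the fact (assuming the admin context is allowed to impersonate) with:

	kubectl --context old-k8s-version-20220325015306-262786 auth can-i list nodes --as=system:kube-scheduler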
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-03-25 01:56:43 UTC, end at Fri 2022-03-25 02:01:27 UTC. --
	Mar 25 01:59:25 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 01:59:25.901732    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 01:59:30 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 01:59:30.902249    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 01:59:35 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 01:59:35.902903    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 01:59:40 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 01:59:40.903408    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 01:59:45 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 01:59:45.904051    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 01:59:50 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 01:59:50.904601    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 01:59:55 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 01:59:55.905273    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:00:00 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:00.905816    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:00:05 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:05.906484    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:00:10 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:10.906998    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:00:15 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:15.907739    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:00:20 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:20.908340    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:00:25 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:25.909014    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:00:30 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:30.909847    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:00:35 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:35.910486    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:00:40 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:40.911183    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:00:45 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:45.911882    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:00:50 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:50.912489    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:00:55 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:55.913113    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:01:00 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:01:00.913828    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:01:05 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:01:05.914441    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:01:10 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:01:10.915161    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:01:15 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:01:15.915890    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:01:20 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:01:20.916693    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:01:25 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:01:25.917325    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
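Note: kubelet repeats the same "cni plugin not initialized" error every five seconds for the entire run, including after kindnet's restart, so the CNI config was never written where kubelet looks. One manual nudge, not something the harness attempts, would be to bounce the CNI daemonset and watch whether the condition clears:

	kubectl --context old-k8s-version-20220325015306-262786 -n kube-system rollout restart daemonset kindnet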
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220325015306-262786 -n old-k8s-version-20220325015306-262786
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20220325015306-262786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-5644d7b6d9-trm4j storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220325015306-262786 describe pod coredns-5644d7b6d9-trm4j storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220325015306-262786 describe pod coredns-5644d7b6d9-trm4j storage-provisioner: exit status 1 (49.806283ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-trm4j" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20220325015306-262786 describe pod coredns-5644d7b6d9-trm4j storage-provisioner: exit status 1
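Note: the NotFound errors above are a quirk of the post-mortem rather than evidence the pods vanished: the field-selector query runs with -A, but the describe is issued without a namespace and therefore looks in default, while coredns and storage-provisioner live in kube-system. The equivalent namespaced call would be:

	kubectl --context old-k8s-version-20220325015306-262786 -n kube-system describe pod coredns-5644d7b6d9-trm4j storage-provisioner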
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (501.61s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (525.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220325014921-262786 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220325014921-262786 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (8m45.813372118s)

                                                
                                                
-- stdout --
	* [calico-20220325014921-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node calico-20220325014921-262786 in cluster calico-20220325014921-262786
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0325 01:54:08.729978  440243 out.go:297] Setting OutFile to fd 1 ...
	I0325 01:54:08.730073  440243 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:54:08.730077  440243 out.go:310] Setting ErrFile to fd 2...
	I0325 01:54:08.730081  440243 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:54:08.730186  440243 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 01:54:08.730435  440243 out.go:304] Setting JSON to false
	I0325 01:54:08.731974  440243 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":16321,"bootTime":1648156928,"procs":518,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 01:54:08.732047  440243 start.go:125] virtualization: kvm guest
	I0325 01:54:08.734786  440243 out.go:176] * [calico-20220325014921-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 01:54:08.736557  440243 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 01:54:08.734993  440243 notify.go:193] Checking for updates...
	I0325 01:54:08.738160  440243 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 01:54:08.739633  440243 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 01:54:08.741445  440243 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 01:54:08.743077  440243 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 01:54:08.743747  440243 config.go:176] Loaded profile config "cilium-20220325014921-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:54:08.743854  440243 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0325 01:54:08.743932  440243 config.go:176] Loaded profile config "running-upgrade-20220325014921-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0325 01:54:08.743969  440243 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 01:54:08.792366  440243 docker.go:136] docker version: linux-20.10.14
	I0325 01:54:08.792474  440243 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 01:54:08.903640  440243 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:54 SystemTime:2022-03-25 01:54:08.830612699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 01:54:08.903802  440243 docker.go:253] overlay module found
	I0325 01:54:08.905980  440243 out.go:176] * Using the docker driver based on user configuration
	I0325 01:54:08.906008  440243 start.go:284] selected driver: docker
	I0325 01:54:08.906013  440243 start.go:801] validating driver "docker" against <nil>
	I0325 01:54:08.906030  440243 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 01:54:08.906081  440243 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 01:54:08.906100  440243 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0325 01:54:08.907688  440243 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 01:54:08.908343  440243 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 01:54:09.005411  440243 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:54 SystemTime:2022-03-25 01:54:08.942531096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 01:54:09.005560  440243 start_flags.go:290] no existing cluster config was found, will generate one from the flags 
	I0325 01:54:09.005759  440243 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 01:54:09.005787  440243 cni.go:93] Creating CNI manager for "calico"
	I0325 01:54:09.005794  440243 start_flags.go:299] Found "Calico" CNI - setting NetworkPlugin=cni
	I0325 01:54:09.005805  440243 start_flags.go:304] config:
	{Name:calico-20220325014921-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:calico-20220325014921-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 01:54:09.008231  440243 out.go:176] * Starting control plane node calico-20220325014921-262786 in cluster calico-20220325014921-262786
	I0325 01:54:09.008268  440243 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 01:54:09.009674  440243 out.go:176] * Pulling base image ...
	I0325 01:54:09.009709  440243 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 01:54:09.009761  440243 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
	I0325 01:54:09.009787  440243 cache.go:57] Caching tarball of preloaded images
	I0325 01:54:09.009798  440243 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 01:54:09.010035  440243 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 01:54:09.010055  440243 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on containerd
	I0325 01:54:09.010191  440243 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/config.json ...
	I0325 01:54:09.010222  440243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/config.json: {Name:mkd775fa96e06102d8f75f3a889ba289982b7b7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:54:09.045802  440243 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 01:54:09.045826  440243 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 01:54:09.045844  440243 cache.go:208] Successfully downloaded all kic artifacts
	I0325 01:54:09.045883  440243 start.go:348] acquiring machines lock for calico-20220325014921-262786: {Name:mkdc49cc7d155f3a36b018381959659086377d58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 01:54:09.046004  440243 start.go:352] acquired machines lock for "calico-20220325014921-262786" in 101.673µs
	I0325 01:54:09.046035  440243 start.go:90] Provisioning new machine with config: &{Name:calico-20220325014921-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:calico-20220325014921-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 01:54:09.046134  440243 start.go:127] createHost starting for "" (driver="docker")
	I0325 01:54:09.048936  440243 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0325 01:54:09.049161  440243 start.go:161] libmachine.API.Create for "calico-20220325014921-262786" (driver="docker")
	I0325 01:54:09.049190  440243 client.go:168] LocalClient.Create starting
	I0325 01:54:09.049237  440243 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0325 01:54:09.049272  440243 main.go:130] libmachine: Decoding PEM data...
	I0325 01:54:09.049288  440243 main.go:130] libmachine: Parsing certificate...
	I0325 01:54:09.049342  440243 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0325 01:54:09.049359  440243 main.go:130] libmachine: Decoding PEM data...
	I0325 01:54:09.049373  440243 main.go:130] libmachine: Parsing certificate...
	I0325 01:54:09.049654  440243 cli_runner.go:133] Run: docker network inspect calico-20220325014921-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0325 01:54:09.083617  440243 cli_runner.go:180] docker network inspect calico-20220325014921-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0325 01:54:09.083728  440243 network_create.go:254] running [docker network inspect calico-20220325014921-262786] to gather additional debugging logs...
	I0325 01:54:09.083756  440243 cli_runner.go:133] Run: docker network inspect calico-20220325014921-262786
	W0325 01:54:09.116784  440243 cli_runner.go:180] docker network inspect calico-20220325014921-262786 returned with exit code 1
	I0325 01:54:09.116821  440243 network_create.go:257] error running [docker network inspect calico-20220325014921-262786]: docker network inspect calico-20220325014921-262786: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220325014921-262786
	I0325 01:54:09.116853  440243 network_create.go:259] output of [docker network inspect calico-20220325014921-262786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220325014921-262786
	
	** /stderr **
	I0325 01:54:09.116916  440243 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 01:54:09.148993  440243 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000132328] misses:0}
	I0325 01:54:09.149046  440243 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0325 01:54:09.149061  440243 network_create.go:106] attempt to create docker network calico-20220325014921-262786 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0325 01:54:09.149103  440243 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220325014921-262786
	I0325 01:54:09.220857  440243 network_create.go:90] docker network calico-20220325014921-262786 192.168.49.0/24 created
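The two failed inspects above are the expected probe: minikube treats "No such network" as a free name, reserves the first unused private /24 (192.168.49.0/24 here), and only then creates a dedicated bridge network. A minimal sketch of the same step run by hand, reusing the exact flags logged above:
	docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true calico-20220325014921-262786
	# confirm the subnet took:
	docker network inspect calico-20220325014921-262786 -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}'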
	I0325 01:54:09.220889  440243 kic.go:106] calculated static IP "192.168.49.2" for the "calico-20220325014921-262786" container
	I0325 01:54:09.220955  440243 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0325 01:54:09.262029  440243 cli_runner.go:133] Run: docker volume create calico-20220325014921-262786 --label name.minikube.sigs.k8s.io=calico-20220325014921-262786 --label created_by.minikube.sigs.k8s.io=true
	I0325 01:54:09.299807  440243 oci.go:102] Successfully created a docker volume calico-20220325014921-262786
	I0325 01:54:09.299870  440243 cli_runner.go:133] Run: docker run --rm --name calico-20220325014921-262786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220325014921-262786 --entrypoint /usr/bin/test -v calico-20220325014921-262786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0325 01:54:09.878440  440243 oci.go:106] Successfully prepared a docker volume calico-20220325014921-262786
	I0325 01:54:09.878524  440243 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 01:54:09.878544  440243 kic.go:179] Starting extracting preloaded images to volume ...
	I0325 01:54:09.878610  440243 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220325014921-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0325 01:54:20.276325  440243 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220325014921-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (10.397661894s)
	I0325 01:54:20.276361  440243 kic.go:188] duration metric: took 10.397813 seconds to extract preloaded images to volume
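Two throwaway containers implement the preload: the first (--entrypoint /usr/bin/test ... -d /var/lib) merely proves that the named volume mounts and contains /var/lib, and the second untars the lz4 preload into it so the cluster container starts with its images already under /var. A standalone sketch of the pattern; KICBASE and PRELOAD are shorthand for the image reference and tarball path logged above, and "demo" is a hypothetical volume name:
	docker volume create demo
	docker run --rm --entrypoint /usr/bin/test -v demo:/var "$KICBASE" -d /var/lib
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD:/preloaded.tar:ro" -v demo:/extractDir \
	  "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir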
	W0325 01:54:20.276393  440243 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0325 01:54:20.276402  440243 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0325 01:54:20.276455  440243 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0325 01:54:20.389088  440243 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220325014921-262786 --name calico-20220325014921-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220325014921-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220325014921-262786 --network calico-20220325014921-262786 --ip 192.168.49.2 --volume calico-20220325014921-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0325 01:54:21.024823  440243 cli_runner.go:133] Run: docker container inspect calico-20220325014921-262786 --format={{.State.Running}}
	I0325 01:54:21.068115  440243 cli_runner.go:133] Run: docker container inspect calico-20220325014921-262786 --format={{.State.Status}}
	I0325 01:54:21.108779  440243 cli_runner.go:133] Run: docker exec calico-20220325014921-262786 stat /var/lib/dpkg/alternatives/iptables
	I0325 01:54:21.188924  440243 oci.go:281] the created container "calico-20220325014921-262786" has a running status.
	I0325 01:54:21.188965  440243 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220325014921-262786/id_rsa...
	I0325 01:54:21.338533  440243 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220325014921-262786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0325 01:54:21.524924  440243 cli_runner.go:133] Run: docker container inspect calico-20220325014921-262786 --format={{.State.Status}}
	I0325 01:54:21.575309  440243 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0325 01:54:21.575333  440243 kic_runner.go:114] Args: [docker exec --privileged calico-20220325014921-262786 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0325 01:54:21.680868  440243 cli_runner.go:133] Run: docker container inspect calico-20220325014921-262786 --format={{.State.Status}}
	I0325 01:54:21.722560  440243 machine.go:88] provisioning docker machine ...
	I0325 01:54:21.722604  440243 ubuntu.go:169] provisioning hostname "calico-20220325014921-262786"
	I0325 01:54:21.722660  440243 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220325014921-262786
	I0325 01:54:21.765097  440243 main.go:130] libmachine: Using SSH client type: native
	I0325 01:54:21.765299  440243 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49524 <nil> <nil>}
	I0325 01:54:21.765323  440243 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20220325014921-262786 && echo "calico-20220325014921-262786" | sudo tee /etc/hostname
	I0325 01:54:22.055866  440243 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20220325014921-262786
	
	I0325 01:54:22.055967  440243 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220325014921-262786
	I0325 01:54:22.090181  440243 main.go:130] libmachine: Using SSH client type: native
	I0325 01:54:22.090330  440243 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49524 <nil> <nil>}
	I0325 01:54:22.090349  440243 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220325014921-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220325014921-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220325014921-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 01:54:22.218538  440243 main.go:130] libmachine: SSH cmd err, output: <nil>: 
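The "native" SSH client above dials 127.0.0.1:49524, the ephemeral host port Docker bound to the container's 22/tcp via --publish=127.0.0.1::22 earlier. The same session can be opened manually (a sketch; the short key path stands in for the long integration path logged above):
	docker port calico-20220325014921-262786 22/tcp        # prints 127.0.0.1:49524
	ssh -p 49524 -i .minikube/machines/calico-20220325014921-262786/id_rsa docker@127.0.0.1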
	I0325 01:54:22.218573  440243 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 01:54:22.218591  440243 ubuntu.go:177] setting up certificates
	I0325 01:54:22.218601  440243 provision.go:83] configureAuth start
	I0325 01:54:22.218649  440243 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220325014921-262786
	I0325 01:54:22.251092  440243 provision.go:138] copyHostCerts
	I0325 01:54:22.251174  440243 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 01:54:22.251190  440243 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 01:54:22.251259  440243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 01:54:22.251337  440243 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 01:54:22.251348  440243 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 01:54:22.251373  440243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 01:54:22.251433  440243 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 01:54:22.251442  440243 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 01:54:22.251461  440243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 01:54:22.251514  440243 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.calico-20220325014921-262786 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220325014921-262786]
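configureAuth signs a docker-machine style server certificate with the minikube CA, placing every name in the san=[...] list above into the subjectAltName extension. A rough openssl equivalent (illustrative only, not minikube's actual Go code path; the 1095-day lifetime is an assumption):
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.calico-20220325014921-262786" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 1095 \
	  -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:calico-20220325014921-262786') \
	  -out server.pem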
	I0325 01:54:22.317567  440243 provision.go:172] copyRemoteCerts
	I0325 01:54:22.317624  440243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 01:54:22.317653  440243 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220325014921-262786
	I0325 01:54:22.369868  440243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49524 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220325014921-262786/id_rsa Username:docker}
	I0325 01:54:22.467587  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 01:54:22.487047  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0325 01:54:22.522510  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0325 01:54:22.545153  440243 provision.go:86] duration metric: configureAuth took 326.541544ms
	I0325 01:54:22.545185  440243 ubuntu.go:193] setting minikube options for container-runtime
	I0325 01:54:22.545376  440243 config.go:176] Loaded profile config "calico-20220325014921-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:54:22.545395  440243 machine.go:91] provisioned docker machine in 822.809414ms
	I0325 01:54:22.545402  440243 client.go:171] LocalClient.Create took 13.49620623s
	I0325 01:54:22.545414  440243 start.go:169] duration metric: libmachine.API.Create for "calico-20220325014921-262786" took 13.496253632s
	I0325 01:54:22.545426  440243 start.go:302] post-start starting for "calico-20220325014921-262786" (driver="docker")
	I0325 01:54:22.545431  440243 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 01:54:22.545472  440243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 01:54:22.545523  440243 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220325014921-262786
	I0325 01:54:22.587027  440243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49524 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220325014921-262786/id_rsa Username:docker}
	I0325 01:54:22.682793  440243 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 01:54:22.685791  440243 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 01:54:22.685819  440243 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 01:54:22.685831  440243 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 01:54:22.685838  440243 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 01:54:22.685847  440243 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 01:54:22.685892  440243 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 01:54:22.685963  440243 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 01:54:22.686045  440243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 01:54:22.693249  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 01:54:22.711671  440243 start.go:305] post-start completed in 166.231691ms
	I0325 01:54:22.712014  440243 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220325014921-262786
	I0325 01:54:22.750177  440243 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/config.json ...
	I0325 01:54:22.750387  440243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 01:54:22.750432  440243 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220325014921-262786
	I0325 01:54:22.790925  440243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49524 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220325014921-262786/id_rsa Username:docker}
	I0325 01:54:22.879504  440243 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 01:54:22.883837  440243 start.go:130] duration metric: createHost completed in 13.837687327s
	I0325 01:54:22.883866  440243 start.go:81] releasing machines lock for "calico-20220325014921-262786", held for 13.837846611s
	I0325 01:54:22.883958  440243 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220325014921-262786
	I0325 01:54:22.925622  440243 ssh_runner.go:195] Run: systemctl --version
	I0325 01:54:22.925687  440243 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220325014921-262786
	I0325 01:54:22.925691  440243 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 01:54:22.925739  440243 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220325014921-262786
	I0325 01:54:22.970875  440243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49524 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220325014921-262786/id_rsa Username:docker}
	I0325 01:54:22.973670  440243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49524 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220325014921-262786/id_rsa Username:docker}
	I0325 01:54:23.087780  440243 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 01:54:23.097927  440243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 01:54:23.110694  440243 docker.go:183] disabling docker service ...
	I0325 01:54:23.110747  440243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 01:54:23.134276  440243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 01:54:23.146865  440243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 01:54:23.251750  440243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 01:54:23.382784  440243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 01:54:23.393136  440243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 01:54:23.406642  440243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQuZCIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLnNlcnZpY2UudjEuZGlmZi1zZXJ2aWNlIl0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdjLnYxLnNjaGVkdWxlciJdCiAgICBwYXVzZV90aHJlc2hvbGQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNjaGVkdWxlX2RlbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
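The base64 payload is minikube's default containerd configuration, encoded so a single tee can write it without shell-quoting hazards. Piped through base64 -d it yields a config.toml that opens with:
	version = 2
	root = "/var/lib/containerd"
	state = "/run/containerd"
	oom_score = 0
	[grpc]
	  address = "/run/containerd/containerd.sock"
Further down it pins sandbox_image = "k8s.gcr.io/pause:3.6", snapshotter = "overlayfs", and SystemdCgroup = false, consistent with the cgroupfs driver in the kubelet configuration generated below.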
	I0325 01:54:23.421062  440243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 01:54:23.428708  440243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 01:54:23.435352  440243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 01:54:23.528839  440243 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 01:54:23.630080  440243 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 01:54:23.630145  440243 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 01:54:23.634430  440243 start.go:462] Will wait 60s for crictl version
	I0325 01:54:23.634481  440243 ssh_runner.go:195] Run: sudo crictl version
	I0325 01:54:23.680949  440243 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 01:54:23.681004  440243 ssh_runner.go:195] Run: containerd --version
	I0325 01:54:23.708483  440243 ssh_runner.go:195] Run: containerd --version
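At this point Docker is stopped and masked, /etc/crictl.yaml pins crictl to the containerd socket (which is why the bare sudo crictl calls in this log need no --runtime-endpoint flag), and containerd has been restarted under the new config. A quick manual check that the runtime swap took (sketch):
	sudo systemctl is-active docker        # expect: inactive
	sudo systemctl is-active containerd    # expect: active
	stat -c '%F' /run/containerd/containerd.sock   # expect: socket
	sudo crictl version                    # expect: RuntimeName: containerd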
	I0325 01:54:23.732512  440243 out.go:176] * Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	I0325 01:54:23.732699  440243 cli_runner.go:133] Run: docker network inspect calico-20220325014921-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 01:54:23.826154  440243 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0325 01:54:23.830169  440243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
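The one-liner above is minikube's standard /etc/hosts rewrite: grep -v strips any stale host.minikube.internal line, the fresh 192.168.49.1 mapping is appended, and the temp file is copied back with cp rather than mv, because inside a container /etc/hosts is a bind mount that can be overwritten in place but not replaced. Afterwards the host gateway resolves by name:
	grep host.minikube.internal /etc/hosts    # 192.168.49.1	host.minikube.internal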
	I0325 01:54:23.853737  440243 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 01:54:23.853817  440243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 01:54:23.957839  440243 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 01:54:23.957865  440243 containerd.go:526] Images already preloaded, skipping extraction
	I0325 01:54:23.957915  440243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 01:54:23.986605  440243 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 01:54:23.986629  440243 cache_images.go:84] Images are preloaded, skipping loading
	I0325 01:54:23.986670  440243 ssh_runner.go:195] Run: sudo crictl info
	I0325 01:54:24.015203  440243 cni.go:93] Creating CNI manager for "calico"
	I0325 01:54:24.015232  440243 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 01:54:24.015247  440243 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220325014921-262786 NodeName:calico-20220325014921-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 01:54:24.015369  440243 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "calico-20220325014921-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0325 01:54:24.015449  440243 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calico-20220325014921-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.3 ClusterName:calico-20220325014921-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
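The empty ExecStart= line above is the usual systemd drop-in idiom: it clears the packaged unit's command before substituting minikube's kubelet invocation. Once the 10-kubeadm.conf override is scp'd into place (below), the merged unit can be inspected and reloaded with:
	systemctl cat kubelet                     # base unit plus the 10-kubeadm.conf override
	sudo systemctl daemon-reload && sudo systemctl restart kubelet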
	I0325 01:54:24.015494  440243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
	I0325 01:54:24.025798  440243 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 01:54:24.025878  440243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 01:54:24.034176  440243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (542 bytes)
	I0325 01:54:24.056739  440243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0325 01:54:24.071872  440243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
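That 2057-byte payload is the kubeadm config rendered above; bootstrapping later hands it to the cached kubeadm binary, roughly like this (a sketch; preflight and verbosity flags omitted):
	sudo /var/lib/minikube/binaries/v1.23.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml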
	I0325 01:54:24.086131  440243 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0325 01:54:24.089675  440243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 01:54:24.100954  440243 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786 for IP: 192.168.49.2
	I0325 01:54:24.101091  440243 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 01:54:24.101152  440243 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 01:54:24.101225  440243 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/client.key
	I0325 01:54:24.101247  440243 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/client.crt with IP's: []
	I0325 01:54:24.264197  440243 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/client.crt ...
	I0325 01:54:24.264237  440243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/client.crt: {Name:mk033a710c0c1af6fda10d604f47f32df3fe161f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:54:24.264452  440243 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/client.key ...
	I0325 01:54:24.264477  440243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/client.key: {Name:mk91681a86b6e326f3292a6ae43c11481ba8f4e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:54:24.264621  440243 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/apiserver.key.dd3b5fb2
	I0325 01:54:24.264642  440243 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0325 01:54:24.372524  440243 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/apiserver.crt.dd3b5fb2 ...
	I0325 01:54:24.372560  440243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/apiserver.crt.dd3b5fb2: {Name:mk4e1fcd476e2fbef1e552fa355dc0137385b4a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:54:24.372756  440243 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/apiserver.key.dd3b5fb2 ...
	I0325 01:54:24.372773  440243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/apiserver.key.dd3b5fb2: {Name:mk40cf369ab29ab8ac6e624dddb5c258b31d2579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:54:24.372891  440243 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/apiserver.crt
	I0325 01:54:24.372963  440243 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/apiserver.key
	I0325 01:54:24.373028  440243 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/proxy-client.key
	I0325 01:54:24.373049  440243 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/proxy-client.crt with IP's: []
	I0325 01:54:24.544734  440243 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/proxy-client.crt ...
	I0325 01:54:24.544769  440243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/proxy-client.crt: {Name:mk4d2cf1da32ae5f68d5ac848b496d896b8a3f5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:54:24.544941  440243 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/proxy-client.key ...
	I0325 01:54:24.544959  440243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/proxy-client.key: {Name:mkbbe258640ee6174dc10df9bdcb2d6a1d1de0a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:54:24.545156  440243 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 01:54:24.545209  440243 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 01:54:24.545226  440243 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 01:54:24.545265  440243 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 01:54:24.545299  440243 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 01:54:24.545338  440243 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 01:54:24.545394  440243 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 01:54:24.545996  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 01:54:24.564919  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0325 01:54:24.583879  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 01:54:24.604897  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220325014921-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0325 01:54:24.623417  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 01:54:24.640930  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 01:54:24.658761  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 01:54:24.676679  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 01:54:24.694890  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 01:54:24.714371  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 01:54:24.733681  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 01:54:24.752632  440243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 01:54:24.767724  440243 ssh_runner.go:195] Run: openssl version
	I0325 01:54:24.772992  440243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 01:54:24.781527  440243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 01:54:24.785101  440243 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 01:54:24.785146  440243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 01:54:24.790337  440243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
	I0325 01:54:24.797635  440243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 01:54:24.805263  440243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 01:54:24.808474  440243 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 01:54:24.808517  440243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 01:54:24.813108  440243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0325 01:54:24.820582  440243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 01:54:24.827563  440243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 01:54:24.830980  440243 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 01:54:24.831039  440243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 01:54:24.835852  440243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
	I0325 01:54:24.843292  440243 kubeadm.go:391] StartCluster: {Name:calico-20220325014921-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:calico-20220325014921-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 01:54:24.843400  440243 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 01:54:24.843441  440243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 01:54:24.871610  440243 cri.go:87] found id: ""
	I0325 01:54:24.871673  440243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 01:54:24.879705  440243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 01:54:24.887398  440243 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 01:54:24.887456  440243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 01:54:24.895610  440243 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0325 01:54:24.895675  440243 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0325 01:54:25.210221  440243 out.go:203]   - Generating certificates and keys ...
	I0325 01:54:27.999537  440243 out.go:203]   - Booting up control plane ...
	I0325 01:54:39.921585  440243 out.go:203]   - Configuring RBAC rules ...
	I0325 01:54:40.334826  440243 cni.go:93] Creating CNI manager for "calico"
	I0325 01:54:40.336649  440243 out.go:176] * Configuring Calico (Container Networking Interface) ...
	I0325 01:54:40.336838  440243 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 01:54:40.336856  440243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0325 01:54:40.350553  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 01:54:41.630277  440243 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.279692603s)
	I0325 01:54:41.630327  440243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 01:54:41.630431  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:41.630443  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=calico-20220325014921-262786 minikube.k8s.io/updated_at=2022_03_25T01_54_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:41.637326  440243 ops.go:34] apiserver oom_adj: -16
	I0325 01:54:41.784876  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:42.337068  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:42.837643  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:43.337510  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:43.837221  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:44.336848  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:44.837111  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:45.337681  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:45.836803  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:46.337746  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:46.836837  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:47.336976  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:47.837333  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:48.336798  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:48.836802  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:49.336752  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:49.837805  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:50.337556  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:50.836833  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:51.337719  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:51.837453  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:52.336885  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:52.836979  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:53.337340  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:53.836803  440243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:54:53.892801  440243 kubeadm.go:1020] duration metric: took 12.262425844s to wait for elevateKubeSystemPrivileges.
	I0325 01:54:53.892839  440243 kubeadm.go:393] StartCluster complete in 29.049559189s
	I0325 01:54:53.892862  440243 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:54:53.892971  440243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 01:54:53.894268  440243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:54:54.412072  440243 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220325014921-262786" rescaled to 1
	I0325 01:54:54.412129  440243 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 01:54:54.414055  440243 out.go:176] * Verifying Kubernetes components...
	I0325 01:54:54.414113  440243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 01:54:54.412202  440243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 01:54:54.412228  440243 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0325 01:54:54.414209  440243 addons.go:65] Setting storage-provisioner=true in profile "calico-20220325014921-262786"
	I0325 01:54:54.412388  440243 config.go:176] Loaded profile config "calico-20220325014921-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:54:54.414231  440243 addons.go:65] Setting default-storageclass=true in profile "calico-20220325014921-262786"
	I0325 01:54:54.414259  440243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220325014921-262786"
	I0325 01:54:54.414258  440243 addons.go:153] Setting addon storage-provisioner=true in "calico-20220325014921-262786"
	W0325 01:54:54.414272  440243 addons.go:165] addon storage-provisioner should already be in state true
	I0325 01:54:54.414301  440243 host.go:66] Checking if "calico-20220325014921-262786" exists ...
	I0325 01:54:54.414668  440243 cli_runner.go:133] Run: docker container inspect calico-20220325014921-262786 --format={{.State.Status}}
	I0325 01:54:54.414876  440243 cli_runner.go:133] Run: docker container inspect calico-20220325014921-262786 --format={{.State.Status}}
	I0325 01:54:54.426734  440243 node_ready.go:35] waiting up to 5m0s for node "calico-20220325014921-262786" to be "Ready" ...
	I0325 01:54:54.430491  440243 node_ready.go:49] node "calico-20220325014921-262786" has status "Ready":"True"
	I0325 01:54:54.430516  440243 node_ready.go:38] duration metric: took 3.75251ms waiting for node "calico-20220325014921-262786" to be "Ready" ...
	I0325 01:54:54.430529  440243 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 01:54:54.439655  440243 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace to be "Ready" ...
	I0325 01:54:54.459201  440243 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 01:54:54.459326  440243 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 01:54:54.459340  440243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 01:54:54.459386  440243 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220325014921-262786
	I0325 01:54:54.463255  440243 addons.go:153] Setting addon default-storageclass=true in "calico-20220325014921-262786"
	W0325 01:54:54.463277  440243 addons.go:165] addon default-storageclass should already be in state true
	I0325 01:54:54.463300  440243 host.go:66] Checking if "calico-20220325014921-262786" exists ...
	I0325 01:54:54.463619  440243 cli_runner.go:133] Run: docker container inspect calico-20220325014921-262786 --format={{.State.Status}}
	I0325 01:54:54.497470  440243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49524 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220325014921-262786/id_rsa Username:docker}
	I0325 01:54:54.509030  440243 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 01:54:54.509055  440243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 01:54:54.509107  440243 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220325014921-262786
	I0325 01:54:54.511310  440243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0325 01:54:54.549814  440243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49524 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220325014921-262786/id_rsa Username:docker}
	I0325 01:54:54.601540  440243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 01:54:54.701835  440243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 01:54:56.390626  440243 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.879279473s)
	I0325 01:54:56.390660  440243 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0325 01:54:56.490124  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:54:56.500077  440243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.898489756s)
	I0325 01:54:56.500179  440243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.798309857s)
	I0325 01:54:56.502349  440243 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0325 01:54:56.502371  440243 addons.go:417] enableAddons completed in 2.090164752s
	I0325 01:54:58.973463  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:01.450813  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:03.951437  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:06.450673  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:08.791543  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:10.950007  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:12.950870  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:14.951063  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:17.449838  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:19.449885  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:21.450689  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:23.450820  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:25.949935  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:27.951276  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:30.450113  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:32.950401  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:35.450311  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:37.950414  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:39.950851  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:42.450079  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:44.949971  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:46.950388  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:49.450803  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:51.950005  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:54.450376  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:56.949706  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:59.450129  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:01.450808  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:03.950079  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:06.450402  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:08.450482  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:10.950537  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:13.450494  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:15.950800  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:18.450140  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:20.950489  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:23.450799  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:25.950728  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:28.450548  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:30.950428  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:33.449469  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:35.450252  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:37.949875  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:39.950734  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:42.451036  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:44.950967  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:47.450049  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:49.450281  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:51.950064  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:54.450144  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:56.450569  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:58.450825  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:00.950520  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:03.450640  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:05.450918  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:07.950107  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:09.950750  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:11.951314  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:14.450849  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:16.451359  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:18.950702  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:20.950861  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:22.951062  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:25.451476  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:27.950005  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:30.450420  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:32.951294  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:35.450505  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:37.950444  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:40.450506  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:42.450985  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:44.949672  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:46.950297  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:49.450175  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:51.450356  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:53.950613  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:56.449946  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:58.450110  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:00.451216  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:02.950770  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:05.450556  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:07.451055  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:09.949891  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:11.950918  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:14.450791  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:16.949899  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:19.450727  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:21.950126  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:24.450063  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:26.450260  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:28.450830  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:30.950663  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:33.450641  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:35.950344  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:38.450663  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:40.949547  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:42.950881  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:45.450029  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:47.951426  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:50.450644  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:52.949935  440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:54.453764  440243 pod_ready.go:81] duration metric: took 4m0.014071871s waiting for pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace to be "Ready" ...
	E0325 01:58:54.453795  440243 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0325 01:58:54.453817  440243 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-srh8z" in "kube-system" namespace to be "Ready" ...
	I0325 01:58:56.465394  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:58.466164  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:00.466246  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:02.466356  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:04.466551  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:06.965390  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:08.965530  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:10.966031  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:12.966329  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:15.466507  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:17.966052  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:19.966529  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:22.465926  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:24.466219  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:26.466280  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:28.965577  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:31.465336  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:33.965866  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:36.465716  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:38.465892  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:40.966108  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:43.465526  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:45.465729  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:47.967747  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:50.466203  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:52.966050  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:55.466240  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:57.466490  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:59.966109  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:02.465830  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:04.965956  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:06.968356  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:09.466106  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:11.965385  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:13.966907  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:16.466246  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:18.466374  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:20.966050  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:23.466019  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:25.466373  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:27.966783  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:30.466003  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:32.966003  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:35.466050  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:37.966004  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:40.465841  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:42.966235  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:45.465711  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:47.965776  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:49.966476  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:52.466039  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:54.966085  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:56.966359  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:58.967286  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:01.466350  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:03.466445  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:05.966165  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:07.966194  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:10.466003  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:12.466535  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:14.966152  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:17.466878  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:19.966129  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:21.966568  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:24.466160  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:26.966283  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:29.466812  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:31.966698  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:34.467079  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:36.968791  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:39.466847  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:41.966085  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:44.005446  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:46.465927  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:48.466588  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:50.467017  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:52.966207  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:55.466037  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:57.466468  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:59.966253  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:02.466133  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:04.467494  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:06.966470  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:09.466472  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:11.966062  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:14.465813  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:16.466064  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:18.965276  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:20.966250  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:22.966475  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:25.465717  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:27.465851  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:29.466648  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:31.966295  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:34.466164  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:36.966434  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:39.466484  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:41.965546  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:43.966910  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:46.466223  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:48.965487  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:50.966544  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:53.465995  440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:54.471293  440243 pod_ready.go:81] duration metric: took 4m0.017463742s waiting for pod "calico-node-srh8z" in "kube-system" namespace to be "Ready" ...
	E0325 02:02:54.471319  440243 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0325 02:02:54.471336  440243 pod_ready.go:38] duration metric: took 8m0.040793648s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:02:54.473559  440243 out.go:176] 
	W0325 02:02:54.473682  440243 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0325 02:02:54.473697  440243 out.go:241] * 
	W0325 02:02:54.474403  440243 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0325 02:02:54.476745  440243 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (525.83s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (539.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20220325014921-262786 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p custom-weave-20220325014921-262786 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd: exit status 105 (8m59.624782419s)

                                                
                                                
-- stdout --
	* [custom-weave-20220325014921-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node custom-weave-20220325014921-262786 in cluster custom-weave-20220325014921-262786
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0325 01:54:22.491564  442784 out.go:297] Setting OutFile to fd 1 ...
	I0325 01:54:22.491689  442784 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:54:22.491699  442784 out.go:310] Setting ErrFile to fd 2...
	I0325 01:54:22.491705  442784 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:54:22.491855  442784 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 01:54:22.492187  442784 out.go:304] Setting JSON to false
	I0325 01:54:22.493764  442784 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":16335,"bootTime":1648156928,"procs":621,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 01:54:22.493857  442784 start.go:125] virtualization: kvm guest
	I0325 01:54:22.513694  442784 out.go:176] * [custom-weave-20220325014921-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 01:54:22.513975  442784 notify.go:193] Checking for updates...
	I0325 01:54:22.524002  442784 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 01:54:22.526139  442784 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 01:54:22.527880  442784 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 01:54:22.529850  442784 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 01:54:22.531902  442784 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 01:54:22.533116  442784 config.go:176] Loaded profile config "calico-20220325014921-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:54:22.533308  442784 config.go:176] Loaded profile config "cilium-20220325014921-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:54:22.533471  442784 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0325 01:54:22.533525  442784 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 01:54:22.590660  442784 docker.go:136] docker version: linux-20.10.14
	I0325 01:54:22.590778  442784 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 01:54:22.698870  442784 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 01:54:22.628761785 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 01:54:22.698993  442784 docker.go:253] overlay module found
	I0325 01:54:22.701688  442784 out.go:176] * Using the docker driver based on user configuration
	I0325 01:54:22.701718  442784 start.go:284] selected driver: docker
	I0325 01:54:22.701725  442784 start.go:801] validating driver "docker" against <nil>
	I0325 01:54:22.701747  442784 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 01:54:22.701818  442784 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 01:54:22.701842  442784 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0325 01:54:22.703243  442784 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 01:54:22.703878  442784 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 01:54:22.814415  442784 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 01:54:22.741243339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 01:54:22.814572  442784 start_flags.go:290] no existing cluster config was found, will generate one from the flags 
	I0325 01:54:22.814786  442784 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 01:54:22.814813  442784 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0325 01:54:22.814828  442784 start_flags.go:299] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0325 01:54:22.814847  442784 start_flags.go:304] config:
	{Name:custom-weave-20220325014921-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:custom-weave-20220325014921-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 01:54:22.817264  442784 out.go:176] * Starting control plane node custom-weave-20220325014921-262786 in cluster custom-weave-20220325014921-262786
	I0325 01:54:22.817309  442784 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 01:54:22.819133  442784 out.go:176] * Pulling base image ...
	I0325 01:54:22.819168  442784 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 01:54:22.819198  442784 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
	I0325 01:54:22.819218  442784 cache.go:57] Caching tarball of preloaded images
	I0325 01:54:22.819300  442784 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 01:54:22.819387  442784 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 01:54:22.819407  442784 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on containerd
	I0325 01:54:22.819510  442784 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/config.json ...
	I0325 01:54:22.819538  442784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/config.json: {Name:mk107e333cf606cbf7e5164ea7ceeee4cbcf7ba3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:54:22.858882  442784 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 01:54:22.858922  442784 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 01:54:22.858946  442784 cache.go:208] Successfully downloaded all kic artifacts
	I0325 01:54:22.859029  442784 start.go:348] acquiring machines lock for custom-weave-20220325014921-262786: {Name:mk98badb4a364d819f3b9a89a3e01ef171aef8e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 01:54:22.859216  442784 start.go:352] acquired machines lock for "custom-weave-20220325014921-262786" in 159.428µs
	I0325 01:54:22.859256  442784 start.go:90] Provisioning new machine with config: &{Name:custom-weave-20220325014921-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:custom-weave-20220325014921-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 01:54:22.859385  442784 start.go:127] createHost starting for "" (driver="docker")
	I0325 01:54:22.862032  442784 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0325 01:54:22.862321  442784 start.go:161] libmachine.API.Create for "custom-weave-20220325014921-262786" (driver="docker")
	I0325 01:54:22.862359  442784 client.go:168] LocalClient.Create starting
	I0325 01:54:22.862442  442784 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0325 01:54:22.862479  442784 main.go:130] libmachine: Decoding PEM data...
	I0325 01:54:22.862503  442784 main.go:130] libmachine: Parsing certificate...
	I0325 01:54:22.862588  442784 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0325 01:54:22.862612  442784 main.go:130] libmachine: Decoding PEM data...
	I0325 01:54:22.862630  442784 main.go:130] libmachine: Parsing certificate...
	I0325 01:54:22.863093  442784 cli_runner.go:133] Run: docker network inspect custom-weave-20220325014921-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0325 01:54:22.899829  442784 cli_runner.go:180] docker network inspect custom-weave-20220325014921-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0325 01:54:22.899902  442784 network_create.go:254] running [docker network inspect custom-weave-20220325014921-262786] to gather additional debugging logs...
	I0325 01:54:22.899931  442784 cli_runner.go:133] Run: docker network inspect custom-weave-20220325014921-262786
	W0325 01:54:22.944263  442784 cli_runner.go:180] docker network inspect custom-weave-20220325014921-262786 returned with exit code 1
	I0325 01:54:22.944302  442784 network_create.go:257] error running [docker network inspect custom-weave-20220325014921-262786]: docker network inspect custom-weave-20220325014921-262786: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220325014921-262786
	I0325 01:54:22.944318  442784 network_create.go:259] output of [docker network inspect custom-weave-20220325014921-262786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220325014921-262786
	
	** /stderr **
	I0325 01:54:22.944371  442784 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 01:54:22.985263  442784 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-fcb21d43dbbf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:db:45:ae:c5}}
	I0325 01:54:22.985846  442784 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-d34e3e685caf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:1e:a2:00:d3}}
	I0325 01:54:22.986535  442784 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0001ac618] misses:0}
	I0325 01:54:22.986580  442784 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0325 01:54:22.986592  442784 network_create.go:106] attempt to create docker network custom-weave-20220325014921-262786 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0325 01:54:22.986655  442784 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220325014921-262786
	I0325 01:54:23.062558  442784 network_create.go:90] docker network custom-weave-20220325014921-262786 192.168.67.0/24 created
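
The network.go lines above show the subnet-selection step: minikube probes candidate private /24 subnets in order (192.168.49.0, 192.168.58.0, 192.168.67.0) and reserves the first one that no existing bridge interface already claims. A minimal Go sketch of that stepping logic follows; the increment of 9 is inferred from the subnets skipped in this log, and isTaken is a hypothetical stand-in for minikube's real interface scan.

package main

import (
	"fmt"
	"net"
)

// isTaken stands in for the real check, which inspects existing bridge
// interfaces; here we hard-code the two subnets the log shows as taken.
func isTaken(subnet string) bool {
	return subnet == "192.168.49.0/24" || subnet == "192.168.58.0/24"
}

func main() {
	ip := net.ParseIP("192.168.49.0").To4()
	for i := 0; i < 20; i++ {
		subnet := fmt.Sprintf("%s/24", ip)
		if !isTaken(subnet) {
			fmt.Println("using free private subnet", subnet) // prints 192.168.67.0/24
			return
		}
		fmt.Println("skipping subnet", subnet, "that is taken")
		ip[2] += 9 // step the third octet, matching 49 -> 58 -> 67 in the log
	}
}
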
	I0325 01:54:23.062603  442784 kic.go:106] calculated static IP "192.168.67.2" for the "custom-weave-20220325014921-262786" container
	I0325 01:54:23.062666  442784 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0325 01:54:23.100680  442784 cli_runner.go:133] Run: docker volume create custom-weave-20220325014921-262786 --label name.minikube.sigs.k8s.io=custom-weave-20220325014921-262786 --label created_by.minikube.sigs.k8s.io=true
	I0325 01:54:23.144491  442784 oci.go:102] Successfully created a docker volume custom-weave-20220325014921-262786
	I0325 01:54:23.144585  442784 cli_runner.go:133] Run: docker run --rm --name custom-weave-20220325014921-262786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220325014921-262786 --entrypoint /usr/bin/test -v custom-weave-20220325014921-262786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0325 01:54:23.959661  442784 oci.go:106] Successfully prepared a docker volume custom-weave-20220325014921-262786
	I0325 01:54:23.959706  442784 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 01:54:23.959729  442784 kic.go:179] Starting extracting preloaded images to volume ...
	I0325 01:54:23.959804  442784 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220325014921-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0325 01:54:33.447356  442784 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220325014921-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (9.487517072s)
	I0325 01:54:33.447387  442784 kic.go:188] duration metric: took 9.487656 seconds to extract preloaded images to volume
	W0325 01:54:33.447422  442784 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0325 01:54:33.447432  442784 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0325 01:54:33.447493  442784 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0325 01:54:33.542522  442784 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220325014921-262786 --name custom-weave-20220325014921-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220325014921-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220325014921-262786 --network custom-weave-20220325014921-262786 --ip 192.168.67.2 --volume custom-weave-20220325014921-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0325 01:54:33.932813  442784 cli_runner.go:133] Run: docker container inspect custom-weave-20220325014921-262786 --format={{.State.Running}}
	I0325 01:54:33.972793  442784 cli_runner.go:133] Run: docker container inspect custom-weave-20220325014921-262786 --format={{.State.Status}}
	I0325 01:54:34.021148  442784 cli_runner.go:133] Run: docker exec custom-weave-20220325014921-262786 stat /var/lib/dpkg/alternatives/iptables
	I0325 01:54:34.129785  442784 oci.go:281] the created container "custom-weave-20220325014921-262786" has a running status.
	I0325 01:54:34.129846  442784 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220325014921-262786/id_rsa...
	I0325 01:54:34.358995  442784 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220325014921-262786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0325 01:54:34.477891  442784 cli_runner.go:133] Run: docker container inspect custom-weave-20220325014921-262786 --format={{.State.Status}}
	I0325 01:54:34.541119  442784 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0325 01:54:34.541142  442784 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220325014921-262786 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0325 01:54:34.649271  442784 cli_runner.go:133] Run: docker container inspect custom-weave-20220325014921-262786 --format={{.State.Status}}
	I0325 01:54:34.684487  442784 machine.go:88] provisioning docker machine ...
	I0325 01:54:34.684534  442784 ubuntu.go:169] provisioning hostname "custom-weave-20220325014921-262786"
	I0325 01:54:34.684597  442784 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220325014921-262786
	I0325 01:54:34.728553  442784 main.go:130] libmachine: Using SSH client type: native
	I0325 01:54:34.728784  442784 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49529 <nil> <nil>}
	I0325 01:54:34.728815  442784 main.go:130] libmachine: About to run SSH command:
	sudo hostname custom-weave-20220325014921-262786 && echo "custom-weave-20220325014921-262786" | sudo tee /etc/hostname
	I0325 01:54:34.863440  442784 main.go:130] libmachine: SSH cmd err, output: <nil>: custom-weave-20220325014921-262786
	
	I0325 01:54:34.863515  442784 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220325014921-262786
	I0325 01:54:34.905817  442784 main.go:130] libmachine: Using SSH client type: native
	I0325 01:54:34.906020  442784 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49529 <nil> <nil>}
	I0325 01:54:34.906049  442784 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20220325014921-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20220325014921-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20220325014921-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 01:54:35.034819  442784 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 01:54:35.034860  442784 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 01:54:35.034892  442784 ubuntu.go:177] setting up certificates
	I0325 01:54:35.034908  442784 provision.go:83] configureAuth start
	I0325 01:54:35.034995  442784 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220325014921-262786
	I0325 01:54:35.079990  442784 provision.go:138] copyHostCerts
	I0325 01:54:35.080065  442784 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 01:54:35.080081  442784 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 01:54:35.080164  442784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 01:54:35.080263  442784 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 01:54:35.080281  442784 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 01:54:35.080313  442784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 01:54:35.080370  442784 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 01:54:35.080380  442784 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 01:54:35.080403  442784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 01:54:35.080449  442784 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20220325014921-262786 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20220325014921-262786]
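
The provision.go line above records the SAN list baked into the generated server certificate: 192.168.67.2, 127.0.0.1, localhost, minikube, and the profile name. As an illustration of the technique (not minikube's actual implementation, which lives behind certs.go and crypto.go), here is a hypothetical Go sketch that issues a CA-signed server certificate with that SAN set using crypto/x509; the file paths are placeholders, error handling is elided, and the CA key is assumed to be PKCS#1 PEM.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA certificate and private key; placeholder paths.
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes PKCS#1 key

	// Fresh server key plus a template carrying the SANs seen in the log.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.custom-weave-20220325014921-262786"}},
		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "custom-weave-20220325014921-262786"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
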
	I0325 01:54:35.168408  442784 provision.go:172] copyRemoteCerts
	I0325 01:54:35.168476  442784 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 01:54:35.168523  442784 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220325014921-262786
	I0325 01:54:35.212553  442784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220325014921-262786/id_rsa Username:docker}
	I0325 01:54:35.303879  442784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 01:54:35.327013  442784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0325 01:54:35.347320  442784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0325 01:54:35.365123  442784 provision.go:86] duration metric: configureAuth took 330.200853ms
	I0325 01:54:35.365160  442784 ubuntu.go:193] setting minikube options for container-runtime
	I0325 01:54:35.365340  442784 config.go:176] Loaded profile config "custom-weave-20220325014921-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:54:35.365356  442784 machine.go:91] provisioned docker machine in 680.840816ms
	I0325 01:54:35.365365  442784 client.go:171] LocalClient.Create took 12.502994194s
	I0325 01:54:35.365389  442784 start.go:169] duration metric: libmachine.API.Create for "custom-weave-20220325014921-262786" took 12.503069327s
	I0325 01:54:35.365412  442784 start.go:302] post-start starting for "custom-weave-20220325014921-262786" (driver="docker")
	I0325 01:54:35.365424  442784 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 01:54:35.365482  442784 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 01:54:35.365534  442784 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220325014921-262786
	I0325 01:54:35.409143  442784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220325014921-262786/id_rsa Username:docker}
	I0325 01:54:35.508590  442784 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 01:54:35.511951  442784 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 01:54:35.511983  442784 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 01:54:35.511999  442784 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 01:54:35.512006  442784 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 01:54:35.512017  442784 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 01:54:35.512077  442784 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 01:54:35.512164  442784 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 01:54:35.512274  442784 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 01:54:35.520608  442784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 01:54:35.543476  442784 start.go:305] post-start completed in 178.043905ms
	I0325 01:54:35.543845  442784 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220325014921-262786
	I0325 01:54:35.578590  442784 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/config.json ...
	I0325 01:54:35.578806  442784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 01:54:35.578847  442784 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220325014921-262786
	I0325 01:54:35.626176  442784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220325014921-262786/id_rsa Username:docker}
	I0325 01:54:35.716631  442784 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 01:54:35.722558  442784 start.go:130] duration metric: createHost completed in 12.863156547s
	I0325 01:54:35.722586  442784 start.go:81] releasing machines lock for "custom-weave-20220325014921-262786", held for 12.863351387s
	I0325 01:54:35.722677  442784 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220325014921-262786
	I0325 01:54:35.762528  442784 ssh_runner.go:195] Run: systemctl --version
	I0325 01:54:35.762571  442784 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 01:54:35.762598  442784 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220325014921-262786
	I0325 01:54:35.762635  442784 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220325014921-262786
	I0325 01:54:35.810833  442784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220325014921-262786/id_rsa Username:docker}
	I0325 01:54:35.811038  442784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220325014921-262786/id_rsa Username:docker}
	I0325 01:54:35.922362  442784 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 01:54:35.939395  442784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 01:54:35.955640  442784 docker.go:183] disabling docker service ...
	I0325 01:54:35.955701  442784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 01:54:35.972001  442784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 01:54:35.980889  442784 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 01:54:36.089540  442784 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 01:54:36.186987  442784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 01:54:36.198171  442784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 01:54:36.217173  442784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQuZCIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLnNlcnZpY2UudjEuZGlmZi1zZXJ2aWNlIl0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdjLnYxLnNjaGVkdWxlciJdCiAgICBwYXVzZV90aHJlc2hvbGQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNjaGVkdWxlX2RlbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
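
The command above ships containerd's config.toml as a single base64 blob and decodes it on the node with base64 -d; the payload begins with "version = 2". A small, hypothetical Go sketch for inspecting such a payload offline (only the first chunk of the blob is inlined here; paste the full string from the log line above to see the whole file):

package main

import (
	"encoding/base64"
	"fmt"
	"os"
)

func main() {
	// First chunk only; the full blob is in the log line above.
	blob := "dmVyc2lvbiA9IDIK"
	cfg, err := base64.StdEncoding.DecodeString(blob)
	if err != nil {
		fmt.Fprintln(os.Stderr, "decode failed:", err)
		os.Exit(1)
	}
	os.Stdout.Write(cfg) // prints: version = 2
}
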
	I0325 01:54:36.236766  442784 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 01:54:36.245030  442784 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 01:54:36.252936  442784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 01:54:36.339262  442784 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 01:54:36.426441  442784 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 01:54:36.426519  442784 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 01:54:36.431100  442784 start.go:462] Will wait 60s for crictl version
	I0325 01:54:36.431164  442784 ssh_runner.go:195] Run: sudo crictl version
	I0325 01:54:36.469071  442784 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T01:54:36Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
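
The lines above pair a 60-second budget ("Will wait 60s for crictl version") with retry.go backing off while containerd's CRI server reports that it is not initialized yet. A minimal sketch of that poll-until-ready pattern, assuming a fixed delay rather than minikube's computed backoff:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second) // matches "Will wait 60s for crictl version"
	delay := 5 * time.Second                     // assumed; minikube derives its own interval
	for {
		out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out) // runtime is up; print its version report
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("gave up waiting for the CRI:", err)
			return
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
}
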
	I0325 01:54:47.516239  442784 ssh_runner.go:195] Run: sudo crictl version
	I0325 01:54:47.539787  442784 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 01:54:47.539848  442784 ssh_runner.go:195] Run: containerd --version
	I0325 01:54:47.559791  442784 ssh_runner.go:195] Run: containerd --version
	I0325 01:54:47.581908  442784 out.go:176] * Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	I0325 01:54:47.581994  442784 cli_runner.go:133] Run: docker network inspect custom-weave-20220325014921-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 01:54:47.612850  442784 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0325 01:54:47.616178  442784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 01:54:47.625685  442784 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 01:54:47.625748  442784 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 01:54:47.658137  442784 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 01:54:47.658167  442784 containerd.go:526] Images already preloaded, skipping extraction
	I0325 01:54:47.658230  442784 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 01:54:47.682641  442784 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 01:54:47.682662  442784 cache_images.go:84] Images are preloaded, skipping loading
	I0325 01:54:47.682711  442784 ssh_runner.go:195] Run: sudo crictl info
	I0325 01:54:47.704917  442784 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0325 01:54:47.704952  442784 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 01:54:47.704967  442784 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20220325014921-262786 NodeName:custom-weave-20220325014921-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 01:54:47.705084  442784 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "custom-weave-20220325014921-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0325 01:54:47.705173  442784 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=custom-weave-20220325014921-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.3 ClusterName:custom-weave-20220325014921-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
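
The kubelet unit above is rendered with the runtime endpoint, hostname override, and node IP substituted in. A toy text/template sketch of that kind of rendering, using a hypothetical parameter struct rather than minikube's real config types and only a subset of the flags seen in the log:

package main

import (
	"os"
	"text/template"
)

// A trimmed-down version of the ExecStart line from the log above.
const unit = `[Service]
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime=remote --container-runtime-endpoint={{.Socket}} --hostname-override={{.Node}} --node-ip={{.IP}}
`

func main() {
	params := struct{ Version, Socket, Node, IP string }{
		Version: "v1.23.3",
		Socket:  "unix:///run/containerd/containerd.sock",
		Node:    "custom-weave-20220325014921-262786",
		IP:      "192.168.67.2",
	}
	t := template.Must(template.New("kubelet").Parse(unit))
	t.Execute(os.Stdout, params) // error ignored for brevity
}
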
	I0325 01:54:47.705217  442784 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
	I0325 01:54:47.711980  442784 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 01:54:47.712055  442784 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 01:54:47.718557  442784 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (548 bytes)
	I0325 01:54:47.730222  442784 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0325 01:54:47.742468  442784 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I0325 01:54:47.754849  442784 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0325 01:54:47.757680  442784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 01:54:47.766998  442784 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786 for IP: 192.168.67.2
	I0325 01:54:47.767099  442784 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 01:54:47.767144  442784 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 01:54:47.767194  442784 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/client.key
	I0325 01:54:47.767209  442784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/client.crt with IP's: []
	I0325 01:54:48.064734  442784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/client.crt ...
	I0325 01:54:48.064773  442784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/client.crt: {Name:mkfbeba75127210abdd67641f3fa57a7d20f7cd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:54:48.064961  442784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/client.key ...
	I0325 01:54:48.064979  442784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/client.key: {Name:mk669cc027721eceda885695066ddafd18236224 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:54:48.065077  442784 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/apiserver.key.c7fa3a9e
	I0325 01:54:48.065091  442784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0325 01:54:48.223603  442784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/apiserver.crt.c7fa3a9e ...
	I0325 01:54:48.223634  442784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/apiserver.crt.c7fa3a9e: {Name:mkf80f5609cd953ad98e88544f2141128419ed5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:54:48.223816  442784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/apiserver.key.c7fa3a9e ...
	I0325 01:54:48.223830  442784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/apiserver.key.c7fa3a9e: {Name:mk62b57db76c1fade6fa38d0c0ea5853eedbc4b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:54:48.223935  442784 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/apiserver.crt
	I0325 01:54:48.223988  442784 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/apiserver.key
	I0325 01:54:48.224032  442784 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/proxy-client.key
	I0325 01:54:48.224046  442784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/proxy-client.crt with IP's: []
	I0325 01:54:48.315873  442784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/proxy-client.crt ...
	I0325 01:54:48.315906  442784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/proxy-client.crt: {Name:mk3fd2310bc935b040962bd4ed697d7d35fc4c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:54:48.316106  442784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/proxy-client.key ...
	I0325 01:54:48.316130  442784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/proxy-client.key: {Name:mk0ede8bab546529f01b3d40c763618345770c75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
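Note: the apiserver certificate generated above is issued for the IPs [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1], i.e. the node IP, the first address of the 10.96.0.0/12 ServiceCIDR (the in-cluster kubernetes service ClusterIP), loopback, and the legacy 10.0.0.1 default. Those SANs can be checked directly against the copied certificate; a sketch, assuming a shell on the node:

	# List the IP SANs baked into the apiserver serving certificate.
	openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'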
	I0325 01:54:48.316357  442784 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 01:54:48.316406  442784 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 01:54:48.316425  442784 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 01:54:48.316455  442784 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 01:54:48.316485  442784 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 01:54:48.316521  442784 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 01:54:48.316592  442784 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 01:54:48.317331  442784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 01:54:48.337362  442784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0325 01:54:48.356102  442784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 01:54:48.372824  442784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220325014921-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0325 01:54:48.392654  442784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 01:54:48.412842  442784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 01:54:48.432566  442784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 01:54:48.455606  442784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 01:54:48.474389  442784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 01:54:48.491086  442784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 01:54:48.509691  442784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 01:54:48.529033  442784 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 01:54:48.542701  442784 ssh_runner.go:195] Run: openssl version
	I0325 01:54:48.547611  442784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 01:54:48.555494  442784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 01:54:48.558736  442784 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 01:54:48.558790  442784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 01:54:48.564102  442784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
	I0325 01:54:48.571494  442784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 01:54:48.578584  442784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 01:54:48.581356  442784 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 01:54:48.581389  442784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 01:54:48.585864  442784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
	I0325 01:54:48.594268  442784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 01:54:48.602734  442784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 01:54:48.605924  442784 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 01:54:48.605981  442784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 01:54:48.611281  442784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
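Note: symlink names such as 51391683.0 and b5213941.0 above are OpenSSL subject hashes. Anything that trusts /etc/ssl/certs looks CAs up by `openssl x509 -hash`, so every installed PEM needs a matching <hash>.0 link. A sketch of deriving one of those names by hand, using the same files as this run:

	# Compute the lookup hash OpenSSL uses for this CA...
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# ...and create the /etc/ssl/certs symlink, mirroring the log lines above.
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"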
	I0325 01:54:48.619481  442784 kubeadm.go:391] StartCluster: {Name:custom-weave-20220325014921-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:custom-weave-20220325014921-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 01:54:48.619587  442784 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 01:54:48.619622  442784 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 01:54:48.648921  442784 cri.go:87] found id: ""
	I0325 01:54:48.648987  442784 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 01:54:48.656536  442784 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 01:54:48.663752  442784 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 01:54:48.663806  442784 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 01:54:48.670442  442784 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0325 01:54:48.670479  442784 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
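Note: because the "node" is a Docker container rather than a VM, minikube asks kubeadm to skip the preflight checks that would misfire there (ports, swap, memory, SystemVerification, and pre-existing manifest/etcd directories). The preflight phase can also be run on its own to see what would otherwise fail; a hedged sketch, with the ignore list trimmed to the swap/memory checks for brevity:

	# Dry-run only kubeadm's preflight phase against the generated config.
	sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" \
	  kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Swap,Mem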
	I0325 01:55:04.932050  442784 out.go:203]   - Generating certificates and keys ...
	I0325 01:55:04.936166  442784 out.go:203]   - Booting up control plane ...
	I0325 01:55:04.939871  442784 out.go:203]   - Configuring RBAC rules ...
	I0325 01:55:04.942250  442784 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0325 01:55:04.944398  442784 out.go:176] * Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	I0325 01:55:04.944466  442784 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 01:55:04.944515  442784 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0325 01:55:04.948418  442784 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/tmp/minikube/cni.yaml': No such file or directory
	I0325 01:55:04.948452  442784 ssh_runner.go:362] scp testdata/weavenet.yaml --> /var/tmp/minikube/cni.yaml (10948 bytes)
	I0325 01:55:04.974787  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
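Note: the applied testdata/weavenet.yaml installs Weave Net as a DaemonSet in kube-system; the weave-net-fm6bn pod polled later in this log belongs to it. A quick way to watch that rollout (label and name taken from the upstream Weave manifest; adjust if the test fixture differs):

	# Confirm the Weave Net DaemonSet created by the manifest is rolling out.
	kubectl -n kube-system get daemonset weave-net
	kubectl -n kube-system get pods -l name=weave-net -o wide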
	I0325 01:55:05.905700  442784 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 01:55:05.905784  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:05.905796  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=custom-weave-20220325014921-262786 minikube.k8s.io/updated_at=2022_03_25T01_55_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:05.913279  442784 ops.go:34] apiserver oom_adj: -16
	I0325 01:55:06.350587  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:06.946231  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:07.446253  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:07.946635  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:08.446097  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:08.945803  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:09.446739  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:09.946451  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:10.445869  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:10.945907  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:11.446001  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:11.946712  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:12.446459  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:12.945864  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:13.445997  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:13.945906  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:14.446100  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:14.946258  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:15.446416  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:15.946743  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:16.445790  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:16.946757  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:17.445866  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:17.945939  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:18.446030  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:18.946679  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:19.445913  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:19.946715  442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 01:55:20.003561  442784 kubeadm.go:1020] duration metric: took 14.097833739s to wait for elevateKubeSystemPrivileges.
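Note: the burst of `kubectl get sa default` calls above is minikube waiting for kube-controller-manager to create the `default` ServiceAccount, retrying on a roughly 500ms cadence (about 14s in this run, per the duration metric). The same wait expressed as a standalone loop; a sketch:

	# Poll until the default ServiceAccount exists, i.e. the SA controller is up.
	until sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done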
	I0325 01:55:20.003591  442784 kubeadm.go:393] StartCluster complete in 31.384120335s
	I0325 01:55:20.003609  442784 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:55:20.003709  442784 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 01:55:20.004656  442784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 01:55:20.519233  442784 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "custom-weave-20220325014921-262786" rescaled to 1
	I0325 01:55:20.519302  442784 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 01:55:20.521500  442784 out.go:176] * Verifying Kubernetes components...
	I0325 01:55:20.519380  442784 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 01:55:20.521565  442784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 01:55:20.519622  442784 config.go:176] Loaded profile config "custom-weave-20220325014921-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:55:20.519397  442784 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0325 01:55:20.521700  442784 addons.go:65] Setting storage-provisioner=true in profile "custom-weave-20220325014921-262786"
	I0325 01:55:20.521715  442784 addons.go:153] Setting addon storage-provisioner=true in "custom-weave-20220325014921-262786"
	W0325 01:55:20.521720  442784 addons.go:165] addon storage-provisioner should already be in state true
	I0325 01:55:20.521747  442784 host.go:66] Checking if "custom-weave-20220325014921-262786" exists ...
	I0325 01:55:20.522052  442784 addons.go:65] Setting default-storageclass=true in profile "custom-weave-20220325014921-262786"
	I0325 01:55:20.522077  442784 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-weave-20220325014921-262786"
	I0325 01:55:20.522294  442784 cli_runner.go:133] Run: docker container inspect custom-weave-20220325014921-262786 --format={{.State.Status}}
	I0325 01:55:20.522407  442784 cli_runner.go:133] Run: docker container inspect custom-weave-20220325014921-262786 --format={{.State.Status}}
	I0325 01:55:20.564684  442784 addons.go:153] Setting addon default-storageclass=true in "custom-weave-20220325014921-262786"
	W0325 01:55:20.564711  442784 addons.go:165] addon default-storageclass should already be in state true
	I0325 01:55:20.564734  442784 host.go:66] Checking if "custom-weave-20220325014921-262786" exists ...
	I0325 01:55:20.565142  442784 cli_runner.go:133] Run: docker container inspect custom-weave-20220325014921-262786 --format={{.State.Status}}
	I0325 01:55:20.567523  442784 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 01:55:20.567643  442784 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 01:55:20.567660  442784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 01:55:20.567697  442784 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220325014921-262786
	I0325 01:55:20.601389  442784 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 01:55:20.601417  442784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 01:55:20.601486  442784 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220325014921-262786
	I0325 01:55:20.603466  442784 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0325 01:55:20.603729  442784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220325014921-262786/id_rsa Username:docker}
	I0325 01:55:20.604592  442784 node_ready.go:35] waiting up to 5m0s for node "custom-weave-20220325014921-262786" to be "Ready" ...
	I0325 01:55:20.607815  442784 node_ready.go:49] node "custom-weave-20220325014921-262786" has status "Ready":"True"
	I0325 01:55:20.607832  442784 node_ready.go:38] duration metric: took 3.210454ms waiting for node "custom-weave-20220325014921-262786" to be "Ready" ...
	I0325 01:55:20.607840  442784 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 01:55:20.616184  442784 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-qsk2c" in "kube-system" namespace to be "Ready" ...
	I0325 01:55:20.644527  442784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220325014921-262786/id_rsa Username:docker}
	I0325 01:55:20.708612  442784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 01:55:20.800646  442784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 01:55:20.994686  442784 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
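Note: the ConfigMap rewrite at 01:55:20.603466 pipes the live coredns Corefile through sed to splice a hosts block ahead of the `forward . /etc/resolv.conf` directive, which is what gives pods the host.minikube.internal name for the gateway (192.168.67.1). The injected stanza can be inspected afterwards; a sketch:

	# Print the live Corefile; after the replace it should contain:
	#     hosts {
	#        192.168.67.1 host.minikube.internal
	#        fallthrough
	#     }
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'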
	I0325 01:55:21.217641  442784 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0325 01:55:21.217669  442784 addons.go:417] enableAddons completed in 698.281954ms
	I0325 01:55:22.627827  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:25.126871  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:27.128372  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:29.128431  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:31.627540  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:34.127415  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:36.128099  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:38.627308  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:40.627890  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:43.127776  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:45.128206  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:47.626880  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:49.627246  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:51.628191  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:54.129214  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:56.129479  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:55:58.627187  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:00.627532  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:02.627738  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:05.126638  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:07.126933  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:09.127472  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:11.627092  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:13.627515  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:16.127205  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:18.128126  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:20.128197  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:22.627521  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:24.628218  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:27.128007  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:29.626998  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:32.127383  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:34.128040  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:36.627385  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:39.128737  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:41.626936  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:43.627468  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:46.128274  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:48.627787  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:51.128134  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:53.628216  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:56.127805  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:56:58.128581  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:00.627978  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:03.128282  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:05.627062  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:07.627148  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:09.627800  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:11.628985  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:14.127368  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:16.128349  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:18.627452  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:21.127596  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:23.627677  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:25.629078  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:28.127454  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:30.127857  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:32.627061  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:34.627281  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:36.627716  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:38.628044  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:41.128288  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:43.627437  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:45.629027  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:48.127262  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:50.627821  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:52.628171  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:54.629330  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:57.127108  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:57:59.127720  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:01.627485  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:04.127661  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:06.127944  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:08.627986  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:11.127221  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:13.127386  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:15.628308  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:18.127661  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:20.627067  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:22.627669  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:25.127702  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:27.627188  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:29.627870  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:32.127319  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:34.127627  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:36.128027  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:38.128157  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:40.627116  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:42.627366  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:44.627901  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:47.127092  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:49.127972  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:51.627645  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:54.128948  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:56.627956  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:58:59.131509  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:01.626847  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:03.627991  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:06.128421  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:08.627165  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:10.628040  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:13.127579  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:15.127811  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:17.628001  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:20.127516  442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:20.631571  442784 pod_ready.go:81] duration metric: took 4m0.015353412s waiting for pod "coredns-64897985d-qsk2c" in "kube-system" namespace to be "Ready" ...
	E0325 01:59:20.631596  442784 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
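Note: the expired 4m wait for coredns-64897985d-qsk2c is the first hard symptom of this failure; the likely cause is that Weave Net (the only CNI in this profile) never reports Ready, so CoreDNS cannot come up on a working pod network. Diagnostics one could run at this point; a hedged sketch (container name per the upstream Weave manifest):

	# Pod events usually name the CNI problem directly.
	kubectl -n kube-system describe pod -l k8s-app=kube-dns
	# And the Weave container log shows the underlying error.
	kubectl -n kube-system logs ds/weave-net -c weave --tail=50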
	I0325 01:59:20.631606  442784 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-x2v4t" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:20.633133  442784 pod_ready.go:97] error getting pod "coredns-64897985d-x2v4t" in "kube-system" namespace (skipping!): pods "coredns-64897985d-x2v4t" not found
	I0325 01:59:20.633152  442784 pod_ready.go:81] duration metric: took 1.540051ms waiting for pod "coredns-64897985d-x2v4t" in "kube-system" namespace to be "Ready" ...
	E0325 01:59:20.633160  442784 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-x2v4t" in "kube-system" namespace (skipping!): pods "coredns-64897985d-x2v4t" not found
	I0325 01:59:20.633166  442784 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:20.637747  442784 pod_ready.go:92] pod "etcd-custom-weave-20220325014921-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 01:59:20.637768  442784 pod_ready.go:81] duration metric: took 4.596316ms waiting for pod "etcd-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:20.637780  442784 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:20.642175  442784 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220325014921-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 01:59:20.642191  442784 pod_ready.go:81] duration metric: took 4.404746ms waiting for pod "kube-apiserver-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:20.642200  442784 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:20.825032  442784 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220325014921-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 01:59:20.825054  442784 pod_ready.go:81] duration metric: took 182.848289ms waiting for pod "kube-controller-manager-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:20.825064  442784 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-zv4v5" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:21.225297  442784 pod_ready.go:92] pod "kube-proxy-zv4v5" in "kube-system" namespace has status "Ready":"True"
	I0325 01:59:21.225318  442784 pod_ready.go:81] duration metric: took 400.248182ms waiting for pod "kube-proxy-zv4v5" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:21.225330  442784 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:21.625682  442784 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220325014921-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 01:59:21.625709  442784 pod_ready.go:81] duration metric: took 400.371185ms waiting for pod "kube-scheduler-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:21.625721  442784 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-fm6bn" in "kube-system" namespace to be "Ready" ...
	I0325 01:59:24.032172  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:26.531791  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:29.031481  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:31.531040  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:33.532410  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:36.031682  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:38.530826  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:40.531743  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:43.031806  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:45.531386  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:47.531588  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:49.531656  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:52.031683  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:54.032187  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:56.531093  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 01:59:58.531417  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:00.531966  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:03.031649  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:05.531282  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:07.531455  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:09.531699  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:12.032938  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:14.531694  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:16.531949  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:19.031660  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:21.531699  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:24.032516  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:26.531380  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:29.030968  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:31.031214  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:33.531559  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:36.031302  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:38.031823  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:40.531380  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:43.031453  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:45.031822  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:47.531831  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:50.032302  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:52.531720  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:55.031662  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:00:57.531581  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:00.030994  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:02.031443  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:04.031865  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:06.031960  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:08.032116  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:10.532240  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:13.031864  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:15.531085  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:17.532033  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:20.031731  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:22.034273  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:24.531669  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:26.531851  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:29.031930  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:31.032144  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:33.532441  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:36.030911  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:38.032668  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:40.531381  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:43.031185  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:45.031474  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:47.531739  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:50.031753  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:52.033806  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:54.531253  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:56.531596  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:01:59.031624  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:01.532128  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:04.031790  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:06.531276  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:08.531499  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:11.032333  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:13.032750  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:15.531147  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:17.531994  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:19.532123  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:22.031701  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:24.032439  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:26.531529  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:29.032351  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:31.032527  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:33.531913  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:35.532732  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:38.031866  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:40.031973  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:42.532018  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:45.032609  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:47.532521  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:50.031692  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:52.032309  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:54.531765  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:56.532252  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:02:59.031607  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:03:01.032024  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:03:03.531780  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:03:06.031451  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:03:08.031594  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:03:10.031910  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:03:12.032100  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:03:14.034095  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:03:16.531592  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:03:19.031831  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:03:21.531133  442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
	I0325 02:03:22.035410  442784 pod_ready.go:81] duration metric: took 4m0.40967505s waiting for pod "weave-net-fm6bn" in "kube-system" namespace to be "Ready" ...
	E0325 02:03:22.035433  442784 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0325 02:03:22.035438  442784 pod_ready.go:38] duration metric: took 8m1.427588624s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:03:22.035459  442784 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:03:22.037971  442784 out.go:176] 
	W0325 02:03:22.038090  442784 out.go:241] X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	W0325 02:03:22.038172  442784 out.go:241] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W0325 02:03:22.038187  442784 out.go:241] * Related issues:
	* Related issues:
	W0325 02:03:22.038230  442784 out.go:241]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W0325 02:03:22.038293  442784 out.go:241]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I0325 02:03:22.040086  442784 out.go:176] 
** /stderr **
net_test.go:101: failed start: exit status 105
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (539.64s)
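The failure mode above is two-stage: weave-net never reports Ready, and once the extra wait expires minikube finds no apiserver process and aborts with K8S_APISERVER_MISSING. A minimal triage sketch for that error, assuming the profile from this log still exists (the commands are standard minikube/crictl CLI; nothing here is taken from the test itself):

	# Is kube-apiserver running or crash-looping inside the node container?
	minikube -p custom-weave-20220325014921-262786 ssh -- sudo crictl ps -a | grep kube-apiserver
	# Collect the full post-mortem bundle minikube can produce
	minikube -p custom-weave-20220325014921-262786 logs --file=custom-weave-postmortem.log
	# Per the suggestion printed above; expect "Disabled" or "Permissive" (requires SELinux tools on the host)
	getenforce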

TestNetworkPlugins/group/kindnet/DNS (306.98s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13009808s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.121004024s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134221848s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128014747s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.124924542s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.123845531s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.116113342s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
E0325 01:58:47.791610  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130137726s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0325 01:58:56.094329  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 01:58:56.099603  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 01:58:56.109861  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 01:58:56.130134  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 01:58:56.170433  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 01:58:56.250794  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 01:58:56.411205  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 01:58:56.732309  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 01:58:57.373259  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 01:58:58.653539  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 01:59:01.214370  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 01:59:03.463098  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
E0325 01:59:06.334908  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 01:59:16.575737  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.115819775s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
E0325 01:59:37.056836  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 01:59:40.295120  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
E0325 01:59:40.300423  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
E0325 01:59:40.310693  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
E0325 01:59:40.331016  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
E0325 01:59:40.371295  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
E0325 01:59:40.451698  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
E0325 01:59:40.612107  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
E0325 01:59:40.932672  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
E0325 01:59:41.573775  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
E0325 01:59:42.854495  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
E0325 01:59:45.415438  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
E0325 01:59:50.536543  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12758057s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0325 02:00:00.777545  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
E0325 02:00:18.017784  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 02:00:21.258601  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.118690143s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0325 02:01:00.415836  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 02:01:02.218882  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
E0325 02:01:05.105222  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.122595823s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kindnet/DNS (306.98s)
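Every attempt above fails the same way: ";; connection timed out; no servers could be reached" means the netcat pod got no answer on port 53 at all, not that the kubernetes.default record is missing, so this points at pod-to-CoreDNS connectivity rather than DNS data. A hedged next-step sketch (the label selectors and the 10.96.0.10 service IP are assumptions based on stock CoreDNS/kindnet manifests and minikube's default service CIDR):

	# Is CoreDNS running, and on which node?
	kubectl --context kindnet-20220325014920-262786 -n kube-system get pods -l k8s-app=kube-dns -o wide
	kubectl --context kindnet-20220325014920-262786 -n kube-system get svc kube-dns
	# Query the DNS service IP directly to separate resolv.conf problems from CNI problems
	kubectl --context kindnet-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default 10.96.0.10
	# Check the CNI agent itself (app=kindnet is the label used by the stock kindnet DaemonSet)
	kubectl --context kindnet-20220325014920-262786 -n kube-system logs -l app=kindnet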

TestStartStop/group/old-k8s-version/serial/DeployApp (484.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220325015306-262786 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [59ddff84-7810-4da2-aeca-9dc7b7afd82b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: ***** TestStartStop/group/old-k8s-version/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:181: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220325015306-262786 -n old-k8s-version-20220325015306-262786
start_stop_delete_test.go:181: TestStartStop/group/old-k8s-version/serial/DeployApp: showing logs for failed pods as of 2022-03-25 02:09:28.883565981 +0000 UTC m=+3093.209694972
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220325015306-262786 describe po busybox -n default
start_stop_delete_test.go:181: (dbg) kubectl --context old-k8s-version-20220325015306-262786 describe po busybox -n default:
Name:         busybox
Namespace:    default
Priority:     0
Node:         <none>
Labels:       integration-test=busybox
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ltrfn (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-ltrfn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-ltrfn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  8m                     default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Warning  FailedScheduling  5m23s (x1 over 6m53s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220325015306-262786 logs busybox -n default
start_stop_delete_test.go:181: (dbg) kubectl --context old-k8s-version-20220325015306-262786 logs busybox -n default:
start_stop_delete_test.go:181: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
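The describe output explains the timeout: busybox is Unschedulable because the only node carries a taint the pod does not tolerate. On a cluster whose apiserver never came up properly this is typically node.kubernetes.io/not-ready, which stays in place while the node is NotReady. A quick confirmation sketch (the taint name is an assumption; read the real one from the output):

	kubectl --context old-k8s-version-20220325015306-262786 get nodes
	kubectl --context old-k8s-version-20220325015306-262786 describe nodes | grep -A2 Taints

Removing the taint by hand (kubectl taint nodes <node> node.kubernetes.io/not-ready:NoSchedule-) would only mask the problem; the pod still lands on a broken node until the control plane is actually healthy.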
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220325015306-262786
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220325015306-262786:
-- stdout --
	[
	    {
	        "Id": "e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b",
	        "Created": "2022-03-25T01:56:43.297059247Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 457693,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-25T01:56:43.655669688Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/hosts",
	        "LogPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b-json.log",
	        "Name": "/old-k8s-version-20220325015306-262786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220325015306-262786:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220325015306-262786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3-init/diff:/var/lib/docker/overlay2/be54eb94fcdf63a43430c63a2deca34069b6322a2c5c80bf32c41c253b4eef44/diff:/var/lib/docker/overlay2/21ae1babc9289fff260c3571871aeb833b37e21656a9cc3eb8df07eb3fe4f79a/diff:/var/lib/docker/overlay2/3ee3331c2ddb88569b411d0ab54a2ef7d7d01ca16c80ced0833580bfbe9bdada/diff:/var/lib/docker/overlay2/d8bc8d60c9bd47ff1095b644ef6d44396d637a148bcebd5ea3b7706fee6b13fe/diff:/var/lib/docker/overlay2/ca1519d93c9c70a99f709b179bab33e31837f4b561c407c362770656a0ad970a/diff:/var/lib/docker/overlay2/8b7ed626d5c01c442f80e5e9bbe87bdaa4e3b209e4d0720010e78ab32631a44a/diff:/var/lib/docker/overlay2/fb54723378f675b6bc72cd8608807007fdf0fc435e1383398764588c2881dcc7/diff:/var/lib/docker/overlay2/20deb1df880f3adcdce0caa0e0b6ce0170bb01f7b7c564aa7c00c10e886a8422/diff:/var/lib/docker/overlay2/3e9c58516a6ca7eb07cbd77ece10826bcffc2c564c20a046413c894f1e457c14/diff:/var/lib/docker/overlay2/9fb4a5
72727350e63058db77497edb3aa8f3fd157bf3faa4b882f3d6218a2d2c/diff:/var/lib/docker/overlay2/2dad70b776042365cd2686f6925d1728b98e82e82f5ec21fcafaa6ce796653ed/diff:/var/lib/docker/overlay2/d94272e0e249e656b05e1483e035d137254d3bab6b9c568065d1f8783a72cf04/diff:/var/lib/docker/overlay2/c92254120acded698585ef577c9ac3d6f73267981cf36a87ee38ccd694f47b94/diff:/var/lib/docker/overlay2/84b4bbb670c367ba779baabe503b5345e2c3e2beb5a4505c3b235e5db4e89ee6/diff:/var/lib/docker/overlay2/4981a02b24aef7d5c066a42837381dcdd4a299b491d8e55523fca674cd0db0d1/diff:/var/lib/docker/overlay2/c3c34e9c466bb3a144a51042f0930825943916afe285a7f97644c400518f341f/diff:/var/lib/docker/overlay2/44f4921d100d6ba90db390588513726503aec84844325bd99eeb137c6018277f/diff:/var/lib/docker/overlay2/a39a458488b7f863079e4c6b58196e8a4f9082987519a734c45a007cd0d94828/diff:/var/lib/docker/overlay2/f0312047c7b0b02fd66fd826e23406e40cee0ca3ceecfd3ead5dcecbc5026230/diff:/var/lib/docker/overlay2/96c9397a20500e41888794ddb5877995a1734042648a24d59ca0d2ec5021e9de/diff:/var/lib/d
ocker/overlay2/8dea2fad08fc7127380e7b5ee48074c49d9bb8abb4e0e626d1753b47e734e16a/diff:/var/lib/docker/overlay2/b45ce3d74626e250be956220b3bdd19784c7b5f160566cf2abc4e3bebec2e787/diff:/var/lib/docker/overlay2/c53d5b53646725c2e75d104fbdf63f67b1a6d4ec7be410f678c39db7ca88704f/diff:/var/lib/docker/overlay2/c989625713fed7c79c6acf122f86cb4a5d36c5c25f16b6ff042aba0f5c76ef40/diff:/var/lib/docker/overlay2/062c90de70f705242f19d7fd008480be165d852e536336d97bcfe7aaba03bc2c/diff:/var/lib/docker/overlay2/9dd5e3e1997449a8dd0820e30ab1aa5b34db265e9783f9431ebcca7ceaf17510/diff:/var/lib/docker/overlay2/4cb50a0a67380109d348cd3005e5b855fceaf243cf5b0130df8952ed58e6c56c/diff:/var/lib/docker/overlay2/1fce572a3789e30bd91fd684a3bc2cae58743b3718b1d078378158f22156795e/diff:/var/lib/docker/overlay2/2bb28738c8f2de75a3da83169e8b29e28c57bf73908fe80dbca06551ac39d459/diff:/var/lib/docker/overlay2/70c1f9c120af3acda7bbd97c063aeed205a47f16b2818b7a2c4e5cfa2e3321bf/diff:/var/lib/docker/overlay2/84cfc718f71abd3da77845f467dceaeceb62953d1f92e9cb2d966b19d2e
9a733/diff:/var/lib/docker/overlay2/8d6f862f75e903072cefca0f974c925dc5946ac5bf7bcb923adecf23cdb3d454/diff:/var/lib/docker/overlay2/778af97f4ec3a1e9ceed247958939b375c3209058ee649ac0231b3ccf59c0e5d/diff:/var/lib/docker/overlay2/c0e0a5b57f41ef9ddf67d67f928bcbbd060abb8aa3ec732c9ee48b3d5ce723a2/diff:/var/lib/docker/overlay2/f4bc2ed173f4985e492d89df0a08aa6017952a9ac37054537d57bb7589c1560e/diff:/var/lib/docker/overlay2/562d496753ef0c1e8279787dfdb7cb4d6e8cfbd0eaf79a1f9dc3fd10916330b5/diff:/var/lib/docker/overlay2/717fb77b4f16514e3bd496845adfe213bd63609053b43f6d800e6757197f0f04/diff:/var/lib/docker/overlay2/4e8d84337665652419a5a40f908d308087c202f55b785114c1e38be84a17eca7/diff:/var/lib/docker/overlay2/5b34f3b4b29c9f9ab991b524096584bbf01d14e9d8d4b7786bda6d28241999e8/diff:/var/lib/docker/overlay2/49e6c28c6a50420d2f8e2a3d3278d425495086d9478a7ece39dd989925949a5d/diff:/var/lib/docker/overlay2/86c1534e0117ca4e106fa3c177c4f1b2d85e37b9d2a5dceeb007afff1721713e/diff:/var/lib/docker/overlay2/c5013a5641f131cadca99884c2ae5b785bfae4
a079463490ea0cd215cd884d43/diff:/var/lib/docker/overlay2/f61ccdb261987275521111370c06a14baf102e5047e24281f278eaaee820a410/diff:/var/lib/docker/overlay2/46838e2b0c3f67b4bfda29963d76e2c8babbd54904a4a6f5745e924a73437c2d/diff:/var/lib/docker/overlay2/16180439a4d3ee12ff794b26cbfd692186d7785b4c6f33c8c57416535667c54e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220325015306-262786",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220325015306-262786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220325015306-262786",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220325015306-262786",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220325015306-262786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44b9519d0b55a0dbe9bc349c627da03ca1d456aab29fe1f9cc6fbe902a60b4e0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49539"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49538"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49535"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49537"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49536"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/44b9519d0b55",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220325015306-262786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e6a4c0e8f4c7",
	                        "old-k8s-version-20220325015306-262786"
	                    ],
	                    "NetworkID": "739cf1dc095b5d758dfcb21f6f999d4a170c6b33046de4a26204586f05d2d4a4",
	                    "EndpointID": "f17636c1e1855543cb0356e0ced5eac0102a5fed579cb886a1c3e850498bc7d7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
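The inspect dump above is the whole JSON document; when a post-mortem only needs a few fields, docker's Go-template formatting trims it down. A small example against the same container (only works while the container still exists):

	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-20220325015306-262786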
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220325015306-262786 -n old-k8s-version-20220325015306-262786
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20220325015306-262786 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20220325015306-262786 logs -n 25: (1.049960771s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | running-upgrade-20220325014921-262786    | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:37 UTC | Fri, 25 Mar 2022 01:54:11 UTC |
	|         | running-upgrade-20220325014921-262786             |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                              |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                          |         |         |                               |                               |
	| delete  | -p                                                | running-upgrade-20220325014921-262786    | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:11 UTC | Fri, 25 Mar 2022 01:54:22 UTC |
	|         | running-upgrade-20220325014921-262786             |                                          |         |         |                               |                               |
	| start   | -p                                                | cilium-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:09 UTC | Fri, 25 Mar 2022 01:54:40 UTC |
	|         | cilium-20220325014921-262786                      |                                          |         |         |                               |                               |
	|         | --memory=2048                                     |                                          |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                          |         |         |                               |                               |
	|         | --cni=cilium --driver=docker                      |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                          |         |         |                               |                               |
	| ssh     | -p                                                | cilium-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:45 UTC | Fri, 25 Mar 2022 01:54:45 UTC |
	|         | cilium-20220325014921-262786                      |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                          |         |         |                               |                               |
	| delete  | -p                                                | cilium-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:57 UTC | Fri, 25 Mar 2022 01:55:00 UTC |
	|         | cilium-20220325014921-262786                      |                                          |         |         |                               |                               |
	| start   | -p                                                | kindnet-20220325014920-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:55:00 UTC | Fri, 25 Mar 2022 01:56:12 UTC |
	|         | kindnet-20220325014920-262786                     |                                          |         |         |                               |                               |
	|         | --memory=2048                                     |                                          |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                          |         |         |                               |                               |
	|         | --cni=kindnet --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                          |         |         |                               |                               |
	| ssh     | -p                                                | kindnet-20220325014920-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:56:17 UTC | Fri, 25 Mar 2022 01:56:17 UTC |
	|         | kindnet-20220325014920-262786                     |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                          |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786             | old-k8s-version-20220325015306-262786    | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:01:26 UTC | Fri, 25 Mar 2022 02:01:27 UTC |
	|         | logs -n 25                                        |                                          |         |         |                               |                               |
	| -p      | kindnet-20220325014920-262786                     | kindnet-20220325014920-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:01:33 UTC | Fri, 25 Mar 2022 02:01:34 UTC |
	|         | logs -n 25                                        |                                          |         |         |                               |                               |
	| delete  | -p                                                | kindnet-20220325014920-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:01:34 UTC | Fri, 25 Mar 2022 02:01:37 UTC |
	|         | kindnet-20220325014920-262786                     |                                          |         |         |                               |                               |
	| start   | -p                                                | enable-default-cni-20220325014920-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:01:37 UTC | Fri, 25 Mar 2022 02:02:36 UTC |
	|         | enable-default-cni-20220325014920-262786          |                                          |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr                   |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                          |         |         |                               |                               |
	|         | --enable-default-cni=true                         |                                          |         |         |                               |                               |
	|         | --driver=docker                                   |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                          |         |         |                               |                               |
	| ssh     | -p                                                | enable-default-cni-20220325014920-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:02:37 UTC | Fri, 25 Mar 2022 02:02:37 UTC |
	|         | enable-default-cni-20220325014920-262786          |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                          |         |         |                               |                               |
	| -p      | calico-20220325014921-262786                      | calico-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:02:55 UTC | Fri, 25 Mar 2022 02:02:55 UTC |
	|         | logs -n 25                                        |                                          |         |         |                               |                               |
	| delete  | -p                                                | calico-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:02:56 UTC | Fri, 25 Mar 2022 02:02:59 UTC |
	|         | calico-20220325014921-262786                      |                                          |         |         |                               |                               |
	| -p      | custom-weave-20220325014921-262786                | custom-weave-20220325014921-262786       | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:03:22 UTC | Fri, 25 Mar 2022 02:03:23 UTC |
	|         | logs -n 25                                        |                                          |         |         |                               |                               |
	| delete  | -p                                                | custom-weave-20220325014921-262786       | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:03:24 UTC | Fri, 25 Mar 2022 02:03:26 UTC |
	|         | custom-weave-20220325014921-262786                |                                          |         |         |                               |                               |
	| start   | -p                                                | bridge-20220325014920-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:02:59 UTC | Fri, 25 Mar 2022 02:03:56 UTC |
	|         | bridge-20220325014920-262786                      |                                          |         |         |                               |                               |
	|         | --memory=2048                                     |                                          |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                          |         |         |                               |                               |
	|         | --cni=bridge --driver=docker                      |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                          |         |         |                               |                               |
	| ssh     | -p                                                | bridge-20220325014920-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:03:57 UTC | Fri, 25 Mar 2022 02:03:57 UTC |
	|         | bridge-20220325014920-262786                      |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                          |         |         |                               |                               |
	| -p      | enable-default-cni-20220325014920-262786          | enable-default-cni-20220325014920-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:07:38 UTC | Fri, 25 Mar 2022 02:07:39 UTC |
	|         | logs -n 25                                        |                                          |         |         |                               |                               |
	| delete  | -p                                                | enable-default-cni-20220325014920-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:07:40 UTC | Fri, 25 Mar 2022 02:07:43 UTC |
	|         | enable-default-cni-20220325014920-262786          |                                          |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                  | no-preload-20220325020326-262786         | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:08:18 UTC | Fri, 25 Mar 2022 02:08:19 UTC |
	|         | logs -n 25                                        |                                          |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20220325020743-262786        | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:07:43 UTC | Fri, 25 Mar 2022 02:08:42 UTC |
	|         | embed-certs-20220325020743-262786                 |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                          |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                          |         |         |                               |                               |
	|         | --driver=docker                                   |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.3                      |                                          |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20220325020743-262786        | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:08:51 UTC | Fri, 25 Mar 2022 02:08:52 UTC |
	|         | embed-certs-20220325020743-262786                 |                                          |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                          |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                          |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20220325020743-262786        | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:08:52 UTC | Fri, 25 Mar 2022 02:09:12 UTC |
	|         | embed-certs-20220325020743-262786                 |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                          |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20220325020743-262786        | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:12 UTC | Fri, 25 Mar 2022 02:09:12 UTC |
	|         | embed-certs-20220325020743-262786                 |                                          |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                          |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
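The table above is minikube's audit trail of the commands that ran against these profiles during the job. On a comparable setup the raw trail can usually be read directly out of the minikube home directory; the path below is an assumption based on minikube's default layout (one JSON record per invocation), not something shown in this report:

	# hypothetical spot-check of the raw audit log (default layout assumed)
	tail -n 25 "$MINIKUBE_HOME/logs/audit.json"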
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 02:09:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0325 02:09:12.731493  493081 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:09:12.731685  493081 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:09:12.731712  493081 out.go:310] Setting ErrFile to fd 2...
	I0325 02:09:12.731719  493081 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:09:12.731861  493081 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:09:12.732145  493081 out.go:304] Setting JSON to false
	I0325 02:09:12.733394  493081 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17225,"bootTime":1648156928,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:09:12.733483  493081 start.go:125] virtualization: kvm guest
	I0325 02:09:12.735978  493081 out.go:176] * [embed-certs-20220325020743-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:09:12.738166  493081 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:09:12.736167  493081 notify.go:193] Checking for updates...
	I0325 02:09:12.739947  493081 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:09:12.741737  493081 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:09:12.743594  493081 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:09:12.745629  493081 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:09:12.746853  493081 config.go:176] Loaded profile config "embed-certs-20220325020743-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:09:12.747516  493081 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:09:12.791102  493081 docker.go:136] docker version: linux-20.10.14
	I0325 02:09:12.791253  493081 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:09:12.893308  493081 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:09:12.82508639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:09:12.893455  493081 docker.go:253] overlay module found
	I0325 02:09:12.897256  493081 out.go:176] * Using the docker driver based on existing profile
	I0325 02:09:12.897291  493081 start.go:284] selected driver: docker
	I0325 02:09:12.897298  493081 start.go:801] validating driver "docker" against &{Name:embed-certs-20220325020743-262786 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:embed-certs-20220325020743-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:09:12.897394  493081 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:09:12.897430  493081 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:09:12.897453  493081 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:09:12.899073  493081 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:09:12.899673  493081 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:09:12.995845  493081 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:09:12.930743245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0325 02:09:12.996002  493081 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:09:12.996024  493081 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:09:12.998609  493081 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:09:12.998721  493081 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 02:09:12.998746  493081 cni.go:93] Creating CNI manager for ""
	I0325 02:09:12.998758  493081 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:09:12.998772  493081 start_flags.go:304] config:
	{Name:embed-certs-20220325020743-262786 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:embed-certs-20220325020743-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:09:13.001763  493081 out.go:176] * Starting control plane node embed-certs-20220325020743-262786 in cluster embed-certs-20220325020743-262786
	I0325 02:09:13.001805  493081 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:09:13.003578  493081 out.go:176] * Pulling base image ...
	I0325 02:09:13.003622  493081 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:09:13.003658  493081 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
	I0325 02:09:13.003677  493081 cache.go:57] Caching tarball of preloaded images
	I0325 02:09:13.003753  493081 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:09:13.003920  493081 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 02:09:13.003937  493081 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on containerd
	I0325 02:09:13.004068  493081 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/config.json ...
	I0325 02:09:13.042489  493081 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:09:13.042529  493081 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:09:13.042548  493081 cache.go:208] Successfully downloaded all kic artifacts
	I0325 02:09:13.042621  493081 start.go:348] acquiring machines lock for embed-certs-20220325020743-262786: {Name:mk09b5bda74ca4ab49b97f5fa7fb6add6f27caec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:09:13.042750  493081 start.go:352] acquired machines lock for "embed-certs-20220325020743-262786" in 104.726µs
	I0325 02:09:13.042776  493081 start.go:94] Skipping create...Using existing machine configuration
	I0325 02:09:13.042786  493081 fix.go:55] fixHost starting: 
	I0325 02:09:13.043087  493081 cli_runner.go:133] Run: docker container inspect embed-certs-20220325020743-262786 --format={{.State.Status}}
	I0325 02:09:13.078477  493081 fix.go:108] recreateIfNeeded on embed-certs-20220325020743-262786: state=Stopped err=<nil>
	W0325 02:09:13.078516  493081 fix.go:134] unexpected machine state, will restart: <nil>
	I0325 02:09:13.082309  493081 out.go:176] * Restarting existing docker container for "embed-certs-20220325020743-262786" ...
	I0325 02:09:13.082389  493081 cli_runner.go:133] Run: docker start embed-certs-20220325020743-262786
	I0325 02:09:13.478445  493081 cli_runner.go:133] Run: docker container inspect embed-certs-20220325020743-262786 --format={{.State.Status}}
	I0325 02:09:13.515869  493081 kic.go:420] container "embed-certs-20220325020743-262786" state is running.
	I0325 02:09:13.516313  493081 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220325020743-262786
	I0325 02:09:13.552451  493081 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/config.json ...
	I0325 02:09:13.552659  493081 machine.go:88] provisioning docker machine ...
	I0325 02:09:13.552691  493081 ubuntu.go:169] provisioning hostname "embed-certs-20220325020743-262786"
	I0325 02:09:13.552750  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:09:13.588753  493081 main.go:130] libmachine: Using SSH client type: native
	I0325 02:09:13.588959  493081 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49564 <nil> <nil>}
	I0325 02:09:13.588977  493081 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220325020743-262786 && echo "embed-certs-20220325020743-262786" | sudo tee /etc/hostname
	I0325 02:09:13.589627  493081 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46458->127.0.0.1:49564: read: connection reset by peer
	I0325 02:09:16.724873  493081 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20220325020743-262786
	
	I0325 02:09:16.724982  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:09:16.760271  493081 main.go:130] libmachine: Using SSH client type: native
	I0325 02:09:16.760424  493081 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49564 <nil> <nil>}
	I0325 02:09:16.760449  493081 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220325020743-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220325020743-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220325020743-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 02:09:16.878846  493081 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 02:09:16.878884  493081 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:09:16.878934  493081 ubuntu.go:177] setting up certificates
	I0325 02:09:16.878971  493081 provision.go:83] configureAuth start
	I0325 02:09:16.879049  493081 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220325020743-262786
	I0325 02:09:16.913630  493081 provision.go:138] copyHostCerts
	I0325 02:09:16.913708  493081 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:09:16.913724  493081 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:09:16.913805  493081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:09:16.913931  493081 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:09:16.913949  493081 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:09:16.913985  493081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:09:16.914069  493081 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:09:16.914079  493081 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:09:16.914110  493081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:09:16.914200  493081 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220325020743-262786 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220325020743-262786]
	I0325 02:09:17.051577  493081 provision.go:172] copyRemoteCerts
	I0325 02:09:17.051652  493081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:09:17.051694  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:09:17.087385  493081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49564 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:09:17.174269  493081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:09:17.191695  493081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0325 02:09:17.209444  493081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0325 02:09:17.226241  493081 provision.go:86] duration metric: configureAuth took 347.251298ms
	I0325 02:09:17.226275  493081 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:09:17.226492  493081 config.go:176] Loaded profile config "embed-certs-20220325020743-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:09:17.226508  493081 machine.go:91] provisioned docker machine in 3.673835506s
	I0325 02:09:17.226517  493081 start.go:302] post-start starting for "embed-certs-20220325020743-262786" (driver="docker")
	I0325 02:09:17.226529  493081 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:09:17.226574  493081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:09:17.226610  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:09:17.260751  493081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49564 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:09:17.346515  493081 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:09:17.349401  493081 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:09:17.349426  493081 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:09:17.349434  493081 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:09:17.349441  493081 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:09:17.349451  493081 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:09:17.349501  493081 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:09:17.349565  493081 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:09:17.349641  493081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:09:17.356382  493081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:09:17.373849  493081 start.go:305] post-start completed in 147.30826ms
	I0325 02:09:17.373947  493081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:09:17.374006  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:09:17.409354  493081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49564 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:09:17.491707  493081 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:09:17.495635  493081 fix.go:57] fixHost completed within 4.452841532s
	I0325 02:09:17.495667  493081 start.go:81] releasing machines lock for "embed-certs-20220325020743-262786", held for 4.452901221s
	I0325 02:09:17.495771  493081 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220325020743-262786
	I0325 02:09:17.529588  493081 ssh_runner.go:195] Run: systemctl --version
	I0325 02:09:17.529640  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:09:17.529677  493081 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:09:17.529740  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:09:17.564482  493081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49564 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:09:17.564781  493081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49564 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:09:17.664739  493081 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:09:17.675874  493081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:09:17.685037  493081 docker.go:183] disabling docker service ...
	I0325 02:09:17.685079  493081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:09:17.693838  493081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:09:17.702555  493081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:09:17.776644  493081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:09:17.856986  493081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 02:09:17.866069  493081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:09:17.878664  493081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
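	For readability, the base64 payload in the command above decodes (base64 -d) to the containerd config that minikube installs on the node. The excerpt below reproduces only fragments verified from the payload itself, with elisions marked; note the cgroupfs driver (SystemdCgroup = false), the pause:3.6 sandbox image, and the non-standard CNI conf_dir matching the kubelet's cni-conf-dir flag:
	version = 2
	root = "/var/lib/containerd"
	state = "/run/containerd"
	oom_score = 0
	[grpc]
	  address = "/run/containerd/containerd.sock"
	...
	    sandbox_image = "k8s.gcr.io/pause:3.6"
	...
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	        SystemdCgroup = false
	    [plugins."io.containerd.grpc.v1.cri".cni]
	      bin_dir = "/opt/cni/bin"
	      conf_dir = "/etc/cni/net.mk"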
	I0325 02:09:17.892023  493081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:09:17.898285  493081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:09:17.904702  493081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:09:17.978826  493081 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:09:18.053589  493081 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:09:18.053661  493081 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:09:18.057362  493081 start.go:462] Will wait 60s for crictl version
	I0325 02:09:18.057421  493081 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:09:18.082292  493081 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:09:18Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0325 02:09:29.129907  493081 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:09:29.157181  493081 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 02:09:29.157257  493081 ssh_runner.go:195] Run: containerd --version
	I0325 02:09:29.181632  493081 ssh_runner.go:195] Run: containerd --version
	I0325 02:09:29.206580  493081 out.go:176] * Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	I0325 02:09:29.206673  493081 cli_runner.go:133] Run: docker network inspect embed-certs-20220325020743-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:09:29.242155  493081 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0325 02:09:29.245424  493081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9d536416454c9       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   0b7c839dde6fb
	f84fedf62f62a       c21b0c7400f98       12 minutes ago      Running             kube-proxy                0                   8329903e5a1d1
	2a8a16a4c5ab0       b305571ca60a5       12 minutes ago      Running             kube-apiserver            0                   6257dca791a92
	0dcaa5ddf16d7       06a629a7e51cd       12 minutes ago      Running             kube-controller-manager   0                   4f6ca772f8d74
	0f2defa775551       301ddc62b80b1       12 minutes ago      Running             kube-scheduler            0                   64b5b98ae89a8
	1366a173f44ad       b2756210eeabf       12 minutes ago      Running             etcd                      0                   f07b14711b6c4
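	One reading of this table: the control-plane containers and kube-proxy have been running for 12 minutes, while kindnet-cni has already exited after its third attempt. That is consistent with the NotReady node condition reported further down. To pull the crashing container's own output on the node, something like the following should work (crictl accepts unique ID prefixes; the ID is taken from the first row above):
	sudo crictl logs 9d536416454c9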
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-03-25 01:56:43 UTC, end at Fri 2022-03-25 02:09:30 UTC. --
	Mar 25 02:02:47 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:02:47.860889118Z" level=warning msg="cleaning up after shim disconnected" id=079cd3357f1fdb712691e0e2faf42ffa65a9f250899b730661a824d22e9c22e3 namespace=k8s.io
	Mar 25 02:02:47 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:02:47.860913039Z" level=info msg="cleaning up dead shim"
	Mar 25 02:02:47 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:02:47.872166437Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:02:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4191\n"
	Mar 25 02:02:48 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:02:48.207913986Z" level=info msg="RemoveContainer for \"8e7808702d5d6d554f961b1120eef82835bf0c35a13a50bc3c3deae13e17b0b7\""
	Mar 25 02:02:48 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:02:48.213786829Z" level=info msg="RemoveContainer for \"8e7808702d5d6d554f961b1120eef82835bf0c35a13a50bc3c3deae13e17b0b7\" returns successfully"
	Mar 25 02:03:00 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:03:00.724454941Z" level=info msg="CreateContainer within sandbox \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Mar 25 02:03:00 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:03:00.740978222Z" level=info msg="CreateContainer within sandbox \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"7e2cc5eb32935c5e2a1f1543fa69420edcdf5ce590686a5deb8fccbd8161ee50\""
	Mar 25 02:03:00 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:03:00.741512336Z" level=info msg="StartContainer for \"7e2cc5eb32935c5e2a1f1543fa69420edcdf5ce590686a5deb8fccbd8161ee50\""
	Mar 25 02:03:00 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:03:00.889357951Z" level=info msg="StartContainer for \"7e2cc5eb32935c5e2a1f1543fa69420edcdf5ce590686a5deb8fccbd8161ee50\" returns successfully"
	Mar 25 02:05:41 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:05:41.131613873Z" level=info msg="shim disconnected" id=7e2cc5eb32935c5e2a1f1543fa69420edcdf5ce590686a5deb8fccbd8161ee50
	Mar 25 02:05:41 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:05:41.131705468Z" level=warning msg="cleaning up after shim disconnected" id=7e2cc5eb32935c5e2a1f1543fa69420edcdf5ce590686a5deb8fccbd8161ee50 namespace=k8s.io
	Mar 25 02:05:41 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:05:41.131719145Z" level=info msg="cleaning up dead shim"
	Mar 25 02:05:41 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:05:41.142981774Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:05:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4879\n"
	Mar 25 02:05:41 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:05:41.428795047Z" level=info msg="RemoveContainer for \"079cd3357f1fdb712691e0e2faf42ffa65a9f250899b730661a824d22e9c22e3\""
	Mar 25 02:05:41 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:05:41.434856726Z" level=info msg="RemoveContainer for \"079cd3357f1fdb712691e0e2faf42ffa65a9f250899b730661a824d22e9c22e3\" returns successfully"
	Mar 25 02:06:08 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:06:08.723637539Z" level=info msg="CreateContainer within sandbox \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Mar 25 02:06:08 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:06:08.737240719Z" level=info msg="CreateContainer within sandbox \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c\""
	Mar 25 02:06:08 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:06:08.737789905Z" level=info msg="StartContainer for \"9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c\""
	Mar 25 02:06:08 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:06:08.888989601Z" level=info msg="StartContainer for \"9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c\" returns successfully"
	Mar 25 02:08:49 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:08:49.148519938Z" level=info msg="shim disconnected" id=9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c
	Mar 25 02:08:49 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:08:49.148584986Z" level=warning msg="cleaning up after shim disconnected" id=9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c namespace=k8s.io
	Mar 25 02:08:49 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:08:49.148625839Z" level=info msg="cleaning up dead shim"
	Mar 25 02:08:49 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:08:49.159560133Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:08:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5615\n"
	Mar 25 02:08:49 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:08:49.670683695Z" level=info msg="RemoveContainer for \"7e2cc5eb32935c5e2a1f1543fa69420edcdf5ce590686a5deb8fccbd8161ee50\""
	Mar 25 02:08:49 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:08:49.676264970Z" level=info msg="RemoveContainer for \"7e2cc5eb32935c5e2a1f1543fa69420edcdf5ce590686a5deb8fccbd8161ee50\" returns successfully"
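	The containerd excerpt shows the crash-loop cadence directly: each kindnet-cni attempt (2, then 3) is created, runs for roughly two and a half minutes, then its shim disconnects and the previous container is removed. A time-bounded journal query on the node, along these lines, would capture the same window with full daemon context:
	sudo journalctl -u containerd --since "2022-03-25 02:02:00" --until "2022-03-25 02:09:30"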
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220325015306-262786
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220325015306-262786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95
	                    minikube.k8s.io/name=old-k8s-version-20220325015306-262786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_25T01_57_11_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Mar 2022 01:57:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Mar 2022 02:09:07 +0000   Fri, 25 Mar 2022 01:57:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Mar 2022 02:09:07 +0000   Fri, 25 Mar 2022 01:57:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Mar 2022 02:09:07 +0000   Fri, 25 Mar 2022 01:57:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 25 Mar 2022 02:09:07 +0000   Fri, 25 Mar 2022 01:57:02 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-20220325015306-262786
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	System Info:
	 Machine ID:                 140a143b31184b58be947b52a01fff83
	 System UUID:                586019ba-8c2c-445d-9550-f545f1f4ef4d
	 Boot ID:                    63fce5d9-a30b-498a-bfed-7dd46d23a363
	 Kernel Version:             5.13.0-1021-gcp
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.5.10
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20220325015306-262786                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kindnet-rx7hj                                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                kube-apiserver-old-k8s-version-20220325015306-262786              250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-controller-manager-old-k8s-version-20220325015306-262786     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-wxllf                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-20220325015306-262786              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From                                               Message
	  ----    ------                   ----               ----                                               -------
	  Normal  NodeAllocatableEnforced  12m                kubelet, old-k8s-version-20220325015306-262786     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet, old-k8s-version-20220325015306-262786     Node old-k8s-version-20220325015306-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet, old-k8s-version-20220325015306-262786     Node old-k8s-version-20220325015306-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet, old-k8s-version-20220325015306-262786     Node old-k8s-version-20220325015306-262786 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-20220325015306-262786  Starting kube-proxy.
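	The node description closes the loop: memory, disk, and PID pressure are all healthy, yet Ready stays False with "cni plugin not initialized", which is precisely what a crash-looping kindnet-cni produces, and the node keeps its not-ready:NoSchedule taint. One way to see which kube-system pods are stuck as a result (minikube names the kubectl context after the profile) would be:
	kubectl --context old-k8s-version-20220325015306-262786 -n kube-system get pods -o wide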
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 c2 02 b4 5e ae 08 06
	[  +1.027929] IPv4: martian source 10.244.0.232 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 c2 02 b4 5e ae 08 06
	[Mar25 02:08] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.029280] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.019935] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +2.947849] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.023822] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.019966] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +2.955831] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.015863] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.023925] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +4.012896] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethf379e9f0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 46 c3 2c 62 64 ba 08 06
	[  +2.492008] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethe9bd593f
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 54 6e 3f 19 d8 08 06
	
	* 
	* ==> etcd [1366a173f44ada0abf1e4f2c5003b1d9df1c0ee0a950928cdf3a5f3f7048faaa] <==
	* 2022-03-25 01:57:01.803372 W | auth: simple token is not cryptographically signed
	2022-03-25 01:57:01.806268 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2022-03-25 01:57:01.807413 I | etcdserver: ea7e25599daad906 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2022-03-25 01:57:01.807883 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2022-03-25 01:57:01.808954 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2022-03-25 01:57:01.809140 I | embed: listening for metrics on http://127.0.0.1:2381
	2022-03-25 01:57:01.809206 I | embed: listening for metrics on http://192.168.76.2:2381
	2022-03-25 01:57:02.596023 I | raft: ea7e25599daad906 is starting a new election at term 1
	2022-03-25 01:57:02.596060 I | raft: ea7e25599daad906 became candidate at term 2
	2022-03-25 01:57:02.596077 I | raft: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	2022-03-25 01:57:02.596090 I | raft: ea7e25599daad906 became leader at term 2
	2022-03-25 01:57:02.596097 I | raft: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2022-03-25 01:57:02.596295 I | etcdserver: setting up the initial cluster version to 3.3
	2022-03-25 01:57:02.597359 N | etcdserver/membership: set the initial cluster version to 3.3
	2022-03-25 01:57:02.597406 I | etcdserver/api: enabled capabilities for version 3.3
	2022-03-25 01:57:02.597440 I | etcdserver: published {Name:old-k8s-version-20220325015306-262786 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2022-03-25 01:57:02.597617 I | embed: ready to serve client requests
	2022-03-25 01:57:02.597747 I | embed: ready to serve client requests
	2022-03-25 01:57:02.600650 I | embed: serving client requests on 192.168.76.2:2379
	2022-03-25 01:57:02.601990 I | embed: serving client requests on 127.0.0.1:2379
	2022-03-25 02:03:04.607039 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:799" took too long (118.700488ms) to execute
	2022-03-25 02:03:07.917572 W | etcdserver: read-only range request "key:\"/registry/storageclasses\" range_end:\"/registry/storageclasset\" count_only:true " with result "range_response_count:0 size:7" took too long (207.909341ms) to execute
	2022-03-25 02:04:06.057632 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (131.609091ms) to execute
	2022-03-25 02:07:02.631481 I | mvcc: store.index: compact 479
	2022-03-25 02:07:02.632292 I | mvcc: finished scheduled compaction at 479 (took 465.98µs)
	
	* 
	* ==> kernel <==
	*  02:09:30 up  4:47,  0 users,  load average: 1.61, 1.28, 1.60
	Linux old-k8s-version-20220325015306-262786 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [2a8a16a4c5ab06cec61505599bfcd94a42a8de336bbe343006809032ae98bee0] <==
	* I0325 01:57:05.741087       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0325 01:57:05.742225       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.76.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0325 01:57:05.747229       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
	I0325 01:57:05.747261       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0325 01:57:05.883908       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0325 01:57:05.883932       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0325 01:57:05.884126       1 cache.go:39] Caches are synced for autoregister controller
	I0325 01:57:05.884201       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0325 01:57:06.739679       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0325 01:57:06.739704       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0325 01:57:06.739717       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0325 01:57:06.743177       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I0325 01:57:06.747597       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0325 01:57:06.747620       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0325 01:57:07.493498       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0325 01:57:08.520754       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0325 01:57:08.800880       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0325 01:57:09.114170       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0325 01:57:09.114813       1 controller.go:606] quota admission added evaluator for: endpoints
	I0325 01:57:09.966541       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0325 01:57:10.500104       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0325 01:57:10.871924       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0325 01:57:25.143684       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0325 01:57:25.153906       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0325 01:57:25.619240       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [0dcaa5ddf16d74bb0f7b672cf9c1f93a9049cfc9e9fa01287dfc31c913129a95] <==
	* I0325 01:57:25.519444       1 shared_informer.go:204] Caches are synced for disruption 
	I0325 01:57:25.519471       1 disruption.go:341] Sending events to api server.
	I0325 01:57:25.564979       1 shared_informer.go:204] Caches are synced for persistent volume 
	I0325 01:57:25.567532       1 shared_informer.go:204] Caches are synced for node 
	I0325 01:57:25.567556       1 range_allocator.go:172] Starting range CIDR allocator
	I0325 01:57:25.567570       1 shared_informer.go:197] Waiting for caches to sync for cidrallocator
	I0325 01:57:25.569098       1 shared_informer.go:204] Caches are synced for HPA 
	I0325 01:57:25.569516       1 shared_informer.go:204] Caches are synced for TTL 
	I0325 01:57:25.615069       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0325 01:57:25.619293       1 shared_informer.go:204] Caches are synced for taint 
	I0325 01:57:25.619399       1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone: 
	W0325 01:57:25.619533       1 node_lifecycle_controller.go:903] Missing timestamp for Node old-k8s-version-20220325015306-262786. Assuming now as a timestamp.
	I0325 01:57:25.619601       1 node_lifecycle_controller.go:1058] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0325 01:57:25.619813       1 taint_manager.go:186] Starting NoExecuteTaintManager
	I0325 01:57:25.619960       1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-20220325015306-262786", UID:"f6951a5c-6edc-46f8-beec-3c90a8b9581c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node old-k8s-version-20220325015306-262786 event: Registered Node old-k8s-version-20220325015306-262786 in Controller
	I0325 01:57:25.627002       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"9ebcce20-95c8-46a7-994a-18f1bc7bd92e", APIVersion:"apps/v1", ResourceVersion:"232", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-rx7hj
	I0325 01:57:25.629138       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"6d02422f-16d5-4e4d-a5bf-93392a263b1e", APIVersion:"apps/v1", ResourceVersion:"221", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-wxllf
	E0325 01:57:25.636892       1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"6d02422f-16d5-4e4d-a5bf-93392a263b1e", ResourceVersion:"221", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63783770230, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b60f60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Names
pace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeS
ource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001c3a040), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001b60f80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001b60fa0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.A
zureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001b60fe0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMo
de)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001685180), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0016811f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServic
eAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001643860), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy
{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0002ceee8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001681238)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0325 01:57:25.667564       1 shared_informer.go:204] Caches are synced for resource quota 
	I0325 01:57:25.667705       1 shared_informer.go:204] Caches are synced for cidrallocator 
	I0325 01:57:25.669937       1 shared_informer.go:204] Caches are synced for resource quota 
	I0325 01:57:25.670463       1 range_allocator.go:359] Set node old-k8s-version-20220325015306-262786 PodCIDR to [10.244.0.0/24]
	I0325 01:57:25.679094       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0325 01:57:25.722642       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0325 01:57:25.722667       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e] <==
	* W0325 01:57:26.609517       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0325 01:57:26.688448       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0325 01:57:26.688492       1 server_others.go:149] Using iptables Proxier.
	I0325 01:57:26.688881       1 server.go:529] Version: v1.16.0
	I0325 01:57:26.690169       1 config.go:131] Starting endpoints config controller
	I0325 01:57:26.690202       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0325 01:57:26.690377       1 config.go:313] Starting service config controller
	I0325 01:57:26.690393       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0325 01:57:26.790460       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0325 01:57:26.790538       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [0f2defa775551729a53f4b102a79f5f1c8e3853bbb12ba362f6555860b09d99a] <==
	* I0325 01:57:05.800810       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0325 01:57:05.892456       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0325 01:57:05.892758       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0325 01:57:05.892875       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0325 01:57:05.892975       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 01:57:05.893150       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 01:57:05.893319       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 01:57:05.893573       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 01:57:05.894058       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 01:57:05.894470       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 01:57:05.894601       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0325 01:57:05.894681       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 01:57:06.894818       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0325 01:57:06.895872       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0325 01:57:06.897095       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0325 01:57:06.898221       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 01:57:06.899310       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 01:57:06.900400       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 01:57:06.901503       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 01:57:06.902607       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 01:57:06.903724       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 01:57:06.904742       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0325 01:57:06.905998       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 01:57:25.156410       1 factory.go:585] pod is already present in the activeQ
	E0325 01:57:25.162943       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-03-25 01:56:43 UTC, end at Fri 2022-03-25 02:09:30 UTC. --
	Mar 25 02:07:45 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:07:45.974743    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:07:50 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:07:50.975563    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:07:55 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:07:55.976355    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:00 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:00.977135    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:05 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:05.977902    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:10 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:10.978673    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:15 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:15.979410    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:20 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:20.980158    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:25 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:25.980913    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:30 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:30.981662    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:35 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:35.982358    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:40 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:40.983271    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:45 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:45.983975    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:49 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:49.671022    1069 pod_workers.go:191] Error syncing pod bf35a126-09fa-4db9-9aa4-2cb811bf4595 ("kindnet-rx7hj_kube-system(bf35a126-09fa-4db9-9aa4-2cb811bf4595)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-rx7hj_kube-system(bf35a126-09fa-4db9-9aa4-2cb811bf4595)"
	Mar 25 02:08:50 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:50.984630    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:55 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:55.985344    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:09:00 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:00.986105    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:09:02 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:02.721374    1069 pod_workers.go:191] Error syncing pod bf35a126-09fa-4db9-9aa4-2cb811bf4595 ("kindnet-rx7hj_kube-system(bf35a126-09fa-4db9-9aa4-2cb811bf4595)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-rx7hj_kube-system(bf35a126-09fa-4db9-9aa4-2cb811bf4595)"
	Mar 25 02:09:05 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:05.986848    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:09:10 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:10.987638    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:09:13 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:13.721480    1069 pod_workers.go:191] Error syncing pod bf35a126-09fa-4db9-9aa4-2cb811bf4595 ("kindnet-rx7hj_kube-system(bf35a126-09fa-4db9-9aa4-2cb811bf4595)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-rx7hj_kube-system(bf35a126-09fa-4db9-9aa4-2cb811bf4595)"
	Mar 25 02:09:15 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:15.988296    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:09:20 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:20.989039    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:09:25 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:25.721214    1069 pod_workers.go:191] Error syncing pod bf35a126-09fa-4db9-9aa4-2cb811bf4595 ("kindnet-rx7hj_kube-system(bf35a126-09fa-4db9-9aa4-2cb811bf4595)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-rx7hj_kube-system(bf35a126-09fa-4db9-9aa4-2cb811bf4595)"
	Mar 25 02:09:25 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:25.989728    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

                                                
                                                
-- /stdout --
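The kubelet log at the end of the dump above shows the failure chain for this test: the kindnet-cni container is stuck in CrashLoopBackOff, so the CNI plugin never initializes, the runtime network stays NetworkReady=false, and the node keeps its not-ready taint. A minimal triage sketch against this profile, using standard kubectl invocations (these commands are not part of the recorded run; the pod name kindnet-rx7hj is taken from the kubelet log above):

	# confirm the node condition that blocks scheduling (runtime network not ready)
	kubectl --context old-k8s-version-20220325015306-262786 describe node old-k8s-version-20220325015306-262786
	# inspect the crashing CNI pod named in the kubelet log, including its previous attempt
	kubectl --context old-k8s-version-20220325015306-262786 -n kube-system describe pod kindnet-rx7hj
	kubectl --context old-k8s-version-20220325015306-262786 -n kube-system logs kindnet-rx7hj --previous
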
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220325015306-262786 -n old-k8s-version-20220325015306-262786
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20220325015306-262786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: busybox coredns-5644d7b6d9-trm4j storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220325015306-262786 describe pod busybox coredns-5644d7b6d9-trm4j storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220325015306-262786 describe pod busybox coredns-5644d7b6d9-trm4j storage-provisioner: exit status 1 (61.047869ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ltrfn (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  default-token-ltrfn:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-ltrfn
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  8m2s                   default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
	  Warning  FailedScheduling  5m25s (x1 over 6m55s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-trm4j" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20220325015306-262786 describe pod busybox coredns-5644d7b6d9-trm4j storage-provisioner: exit status 1
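The describe output above is consistent with that chain: busybox stays Pending because the only node carries a taint it does not tolerate (the not-ready taint applied while the CNI is down), and the NotFound errors for coredns-5644d7b6d9-trm4j and storage-provisioner most likely mean those pods were recreated or removed between the field-selector listing and the describe call. A one-line check for the blocking taint, assuming the same context (a standard kubectl jsonpath query, not part of the recorded run):

	# print each node with its taints; expect node.kubernetes.io/not-ready while the CNI is uninitialized
	kubectl --context old-k8s-version-20220325015306-262786 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
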
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220325015306-262786
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220325015306-262786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b",
	        "Created": "2022-03-25T01:56:43.297059247Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 457693,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-25T01:56:43.655669688Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/hosts",
	        "LogPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b-json.log",
	        "Name": "/old-k8s-version-20220325015306-262786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220325015306-262786:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220325015306-262786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3-init/diff:/var/lib/docker/overlay2/be54eb94fcdf63a43430c63a2deca34069b6322a2c5c80bf32c41c253b4eef44/diff:/var/lib/docker/overlay2/21ae1babc9289fff260c3571871aeb833b37e21656a9cc3eb8df07eb3fe4f79a/diff:/var/lib/docker/overlay2/3ee3331c2ddb88569b411d0ab54a2ef7d7d01ca16c80ced0833580bfbe9bdada/diff:/var/lib/docker/overlay2/d8bc8d60c9bd47ff1095b644ef6d44396d637a148bcebd5ea3b7706fee6b13fe/diff:/var/lib/docker/overlay2/ca1519d93c9c70a99f709b179bab33e31837f4b561c407c362770656a0ad970a/diff:/var/lib/docker/overlay2/8b7ed626d5c01c442f80e5e9bbe87bdaa4e3b209e4d0720010e78ab32631a44a/diff:/var/lib/docker/overlay2/fb54723378f675b6bc72cd8608807007fdf0fc435e1383398764588c2881dcc7/diff:/var/lib/docker/overlay2/20deb1df880f3adcdce0caa0e0b6ce0170bb01f7b7c564aa7c00c10e886a8422/diff:/var/lib/docker/overlay2/3e9c58516a6ca7eb07cbd77ece10826bcffc2c564c20a046413c894f1e457c14/diff:/var/lib/docker/overlay2/9fb4a5
72727350e63058db77497edb3aa8f3fd157bf3faa4b882f3d6218a2d2c/diff:/var/lib/docker/overlay2/2dad70b776042365cd2686f6925d1728b98e82e82f5ec21fcafaa6ce796653ed/diff:/var/lib/docker/overlay2/d94272e0e249e656b05e1483e035d137254d3bab6b9c568065d1f8783a72cf04/diff:/var/lib/docker/overlay2/c92254120acded698585ef577c9ac3d6f73267981cf36a87ee38ccd694f47b94/diff:/var/lib/docker/overlay2/84b4bbb670c367ba779baabe503b5345e2c3e2beb5a4505c3b235e5db4e89ee6/diff:/var/lib/docker/overlay2/4981a02b24aef7d5c066a42837381dcdd4a299b491d8e55523fca674cd0db0d1/diff:/var/lib/docker/overlay2/c3c34e9c466bb3a144a51042f0930825943916afe285a7f97644c400518f341f/diff:/var/lib/docker/overlay2/44f4921d100d6ba90db390588513726503aec84844325bd99eeb137c6018277f/diff:/var/lib/docker/overlay2/a39a458488b7f863079e4c6b58196e8a4f9082987519a734c45a007cd0d94828/diff:/var/lib/docker/overlay2/f0312047c7b0b02fd66fd826e23406e40cee0ca3ceecfd3ead5dcecbc5026230/diff:/var/lib/docker/overlay2/96c9397a20500e41888794ddb5877995a1734042648a24d59ca0d2ec5021e9de/diff:/var/lib/d
ocker/overlay2/8dea2fad08fc7127380e7b5ee48074c49d9bb8abb4e0e626d1753b47e734e16a/diff:/var/lib/docker/overlay2/b45ce3d74626e250be956220b3bdd19784c7b5f160566cf2abc4e3bebec2e787/diff:/var/lib/docker/overlay2/c53d5b53646725c2e75d104fbdf63f67b1a6d4ec7be410f678c39db7ca88704f/diff:/var/lib/docker/overlay2/c989625713fed7c79c6acf122f86cb4a5d36c5c25f16b6ff042aba0f5c76ef40/diff:/var/lib/docker/overlay2/062c90de70f705242f19d7fd008480be165d852e536336d97bcfe7aaba03bc2c/diff:/var/lib/docker/overlay2/9dd5e3e1997449a8dd0820e30ab1aa5b34db265e9783f9431ebcca7ceaf17510/diff:/var/lib/docker/overlay2/4cb50a0a67380109d348cd3005e5b855fceaf243cf5b0130df8952ed58e6c56c/diff:/var/lib/docker/overlay2/1fce572a3789e30bd91fd684a3bc2cae58743b3718b1d078378158f22156795e/diff:/var/lib/docker/overlay2/2bb28738c8f2de75a3da83169e8b29e28c57bf73908fe80dbca06551ac39d459/diff:/var/lib/docker/overlay2/70c1f9c120af3acda7bbd97c063aeed205a47f16b2818b7a2c4e5cfa2e3321bf/diff:/var/lib/docker/overlay2/84cfc718f71abd3da77845f467dceaeceb62953d1f92e9cb2d966b19d2e
9a733/diff:/var/lib/docker/overlay2/8d6f862f75e903072cefca0f974c925dc5946ac5bf7bcb923adecf23cdb3d454/diff:/var/lib/docker/overlay2/778af97f4ec3a1e9ceed247958939b375c3209058ee649ac0231b3ccf59c0e5d/diff:/var/lib/docker/overlay2/c0e0a5b57f41ef9ddf67d67f928bcbbd060abb8aa3ec732c9ee48b3d5ce723a2/diff:/var/lib/docker/overlay2/f4bc2ed173f4985e492d89df0a08aa6017952a9ac37054537d57bb7589c1560e/diff:/var/lib/docker/overlay2/562d496753ef0c1e8279787dfdb7cb4d6e8cfbd0eaf79a1f9dc3fd10916330b5/diff:/var/lib/docker/overlay2/717fb77b4f16514e3bd496845adfe213bd63609053b43f6d800e6757197f0f04/diff:/var/lib/docker/overlay2/4e8d84337665652419a5a40f908d308087c202f55b785114c1e38be84a17eca7/diff:/var/lib/docker/overlay2/5b34f3b4b29c9f9ab991b524096584bbf01d14e9d8d4b7786bda6d28241999e8/diff:/var/lib/docker/overlay2/49e6c28c6a50420d2f8e2a3d3278d425495086d9478a7ece39dd989925949a5d/diff:/var/lib/docker/overlay2/86c1534e0117ca4e106fa3c177c4f1b2d85e37b9d2a5dceeb007afff1721713e/diff:/var/lib/docker/overlay2/c5013a5641f131cadca99884c2ae5b785bfae4
a079463490ea0cd215cd884d43/diff:/var/lib/docker/overlay2/f61ccdb261987275521111370c06a14baf102e5047e24281f278eaaee820a410/diff:/var/lib/docker/overlay2/46838e2b0c3f67b4bfda29963d76e2c8babbd54904a4a6f5745e924a73437c2d/diff:/var/lib/docker/overlay2/16180439a4d3ee12ff794b26cbfd692186d7785b4c6f33c8c57416535667c54e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220325015306-262786",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220325015306-262786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220325015306-262786",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220325015306-262786",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220325015306-262786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44b9519d0b55a0dbe9bc349c627da03ca1d456aab29fe1f9cc6fbe902a60b4e0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49539"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49538"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49535"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49537"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49536"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/44b9519d0b55",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220325015306-262786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e6a4c0e8f4c7",
	                        "old-k8s-version-20220325015306-262786"
	                    ],
	                    "NetworkID": "739cf1dc095b5d758dfcb21f6f999d4a170c6b33046de4a26204586f05d2d4a4",
	                    "EndpointID": "f17636c1e1855543cb0356e0ced5eac0102a5fed579cb886a1c3e850498bc7d7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
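Most of the inspect payload above is boilerplate; the actionable fields are the published ports (8443/tcp mapped to host port 49536 for the API server) and the static address 192.168.76.2 on the per-profile network. A sketch for pulling just those fields with docker's Go-template formatter (standard docker inspect --format usage, not part of the recorded run; the container name is the one above):

	# host port backing the Kubernetes API server endpoint
	docker inspect --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' old-k8s-version-20220325015306-262786
	# container address on the per-profile docker network
	docker inspect --format '{{ (index .NetworkSettings.Networks "old-k8s-version-20220325015306-262786").IPAddress }}' old-k8s-version-20220325015306-262786
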
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220325015306-262786 -n old-k8s-version-20220325015306-262786
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20220325015306-262786 logs -n 25
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                | running-upgrade-20220325014921-262786    | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:11 UTC | Fri, 25 Mar 2022 01:54:22 UTC |
	|         | running-upgrade-20220325014921-262786             |                                          |         |         |                               |                               |
	| start   | -p                                                | cilium-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:09 UTC | Fri, 25 Mar 2022 01:54:40 UTC |
	|         | cilium-20220325014921-262786                      |                                          |         |         |                               |                               |
	|         | --memory=2048                                     |                                          |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                          |         |         |                               |                               |
	|         | --cni=cilium --driver=docker                      |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                          |         |         |                               |                               |
	| ssh     | -p                                                | cilium-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:45 UTC | Fri, 25 Mar 2022 01:54:45 UTC |
	|         | cilium-20220325014921-262786                      |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                          |         |         |                               |                               |
	| delete  | -p                                                | cilium-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:57 UTC | Fri, 25 Mar 2022 01:55:00 UTC |
	|         | cilium-20220325014921-262786                      |                                          |         |         |                               |                               |
	| start   | -p                                                | kindnet-20220325014920-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:55:00 UTC | Fri, 25 Mar 2022 01:56:12 UTC |
	|         | kindnet-20220325014920-262786                     |                                          |         |         |                               |                               |
	|         | --memory=2048                                     |                                          |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                          |         |         |                               |                               |
	|         | --cni=kindnet --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                          |         |         |                               |                               |
	| ssh     | -p                                                | kindnet-20220325014920-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:56:17 UTC | Fri, 25 Mar 2022 01:56:17 UTC |
	|         | kindnet-20220325014920-262786                     |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                          |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786             | old-k8s-version-20220325015306-262786    | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:01:26 UTC | Fri, 25 Mar 2022 02:01:27 UTC |
	|         | logs -n 25                                        |                                          |         |         |                               |                               |
	| -p      | kindnet-20220325014920-262786                     | kindnet-20220325014920-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:01:33 UTC | Fri, 25 Mar 2022 02:01:34 UTC |
	|         | logs -n 25                                        |                                          |         |         |                               |                               |
	| delete  | -p                                                | kindnet-20220325014920-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:01:34 UTC | Fri, 25 Mar 2022 02:01:37 UTC |
	|         | kindnet-20220325014920-262786                     |                                          |         |         |                               |                               |
	| start   | -p                                                | enable-default-cni-20220325014920-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:01:37 UTC | Fri, 25 Mar 2022 02:02:36 UTC |
	|         | enable-default-cni-20220325014920-262786          |                                          |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr                   |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                          |         |         |                               |                               |
	|         | --enable-default-cni=true                         |                                          |         |         |                               |                               |
	|         | --driver=docker                                   |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                          |         |         |                               |                               |
	| ssh     | -p                                                | enable-default-cni-20220325014920-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:02:37 UTC | Fri, 25 Mar 2022 02:02:37 UTC |
	|         | enable-default-cni-20220325014920-262786          |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                          |         |         |                               |                               |
	| -p      | calico-20220325014921-262786                      | calico-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:02:55 UTC | Fri, 25 Mar 2022 02:02:55 UTC |
	|         | logs -n 25                                        |                                          |         |         |                               |                               |
	| delete  | -p                                                | calico-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:02:56 UTC | Fri, 25 Mar 2022 02:02:59 UTC |
	|         | calico-20220325014921-262786                      |                                          |         |         |                               |                               |
	| -p      | custom-weave-20220325014921-262786                | custom-weave-20220325014921-262786       | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:03:22 UTC | Fri, 25 Mar 2022 02:03:23 UTC |
	|         | logs -n 25                                        |                                          |         |         |                               |                               |
	| delete  | -p                                                | custom-weave-20220325014921-262786       | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:03:24 UTC | Fri, 25 Mar 2022 02:03:26 UTC |
	|         | custom-weave-20220325014921-262786                |                                          |         |         |                               |                               |
	| start   | -p                                                | bridge-20220325014920-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:02:59 UTC | Fri, 25 Mar 2022 02:03:56 UTC |
	|         | bridge-20220325014920-262786                      |                                          |         |         |                               |                               |
	|         | --memory=2048                                     |                                          |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                          |         |         |                               |                               |
	|         | --cni=bridge --driver=docker                      |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                          |         |         |                               |                               |
	| ssh     | -p                                                | bridge-20220325014920-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:03:57 UTC | Fri, 25 Mar 2022 02:03:57 UTC |
	|         | bridge-20220325014920-262786                      |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                          |         |         |                               |                               |
	| -p      | enable-default-cni-20220325014920-262786          | enable-default-cni-20220325014920-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:07:38 UTC | Fri, 25 Mar 2022 02:07:39 UTC |
	|         | logs -n 25                                        |                                          |         |         |                               |                               |
	| delete  | -p                                                | enable-default-cni-20220325014920-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:07:40 UTC | Fri, 25 Mar 2022 02:07:43 UTC |
	|         | enable-default-cni-20220325014920-262786          |                                          |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                  | no-preload-20220325020326-262786         | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:08:18 UTC | Fri, 25 Mar 2022 02:08:19 UTC |
	|         | logs -n 25                                        |                                          |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20220325020743-262786        | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:07:43 UTC | Fri, 25 Mar 2022 02:08:42 UTC |
	|         | embed-certs-20220325020743-262786                 |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                          |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                          |         |         |                               |                               |
	|         | --driver=docker                                   |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.3                      |                                          |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20220325020743-262786        | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:08:51 UTC | Fri, 25 Mar 2022 02:08:52 UTC |
	|         | embed-certs-20220325020743-262786                 |                                          |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                          |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                          |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20220325020743-262786        | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:08:52 UTC | Fri, 25 Mar 2022 02:09:12 UTC |
	|         | embed-certs-20220325020743-262786                 |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                          |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20220325020743-262786        | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:12 UTC | Fri, 25 Mar 2022 02:09:12 UTC |
	|         | embed-certs-20220325020743-262786                 |                                          |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                          |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786             | old-k8s-version-20220325015306-262786    | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:29 UTC | Fri, 25 Mar 2022 02:09:30 UTC |
	|         | logs -n 25                                        |                                          |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
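
The audit table above is gathered by the post-mortem "minikube logs" collection step. To regenerate it against the same profile, a minimal sketch (assuming the profile from this run still exists on the agent):

	# re-collect the last 25 audit/log entries for the failing profile
	out/minikube-linux-amd64 -p old-k8s-version-20220325015306-262786 logs -n 25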
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 02:09:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0325 02:09:12.731493  493081 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:09:12.731685  493081 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:09:12.731712  493081 out.go:310] Setting ErrFile to fd 2...
	I0325 02:09:12.731719  493081 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:09:12.731861  493081 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:09:12.732145  493081 out.go:304] Setting JSON to false
	I0325 02:09:12.733394  493081 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17225,"bootTime":1648156928,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:09:12.733483  493081 start.go:125] virtualization: kvm guest
	I0325 02:09:12.735978  493081 out.go:176] * [embed-certs-20220325020743-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:09:12.738166  493081 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:09:12.736167  493081 notify.go:193] Checking for updates...
	I0325 02:09:12.739947  493081 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:09:12.741737  493081 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:09:12.743594  493081 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:09:12.745629  493081 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:09:12.746853  493081 config.go:176] Loaded profile config "embed-certs-20220325020743-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:09:12.747516  493081 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:09:12.791102  493081 docker.go:136] docker version: linux-20.10.14
	I0325 02:09:12.791253  493081 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:09:12.893308  493081 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:09:12.82508639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:09:12.893455  493081 docker.go:253] overlay module found
	I0325 02:09:12.897256  493081 out.go:176] * Using the docker driver based on existing profile
	I0325 02:09:12.897291  493081 start.go:284] selected driver: docker
	I0325 02:09:12.897298  493081 start.go:801] validating driver "docker" against &{Name:embed-certs-20220325020743-262786 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:embed-certs-20220325020743-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:09:12.897394  493081 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:09:12.897430  493081 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:09:12.897453  493081 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:09:12.899073  493081 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:09:12.899673  493081 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:09:12.995845  493081 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:09:12.930743245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0325 02:09:12.996002  493081 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:09:12.996024  493081 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:09:12.998609  493081 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:09:12.998721  493081 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 02:09:12.998746  493081 cni.go:93] Creating CNI manager for ""
	I0325 02:09:12.998758  493081 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:09:12.998772  493081 start_flags.go:304] config:
	{Name:embed-certs-20220325020743-262786 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:embed-certs-20220325020743-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:09:13.001763  493081 out.go:176] * Starting control plane node embed-certs-20220325020743-262786 in cluster embed-certs-20220325020743-262786
	I0325 02:09:13.001805  493081 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:09:13.003578  493081 out.go:176] * Pulling base image ...
	I0325 02:09:13.003622  493081 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:09:13.003658  493081 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
	I0325 02:09:13.003677  493081 cache.go:57] Caching tarball of preloaded images
	I0325 02:09:13.003753  493081 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:09:13.003920  493081 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 02:09:13.003937  493081 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on containerd
	I0325 02:09:13.004068  493081 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/config.json ...
	I0325 02:09:13.042489  493081 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:09:13.042529  493081 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:09:13.042548  493081 cache.go:208] Successfully downloaded all kic artifacts
	I0325 02:09:13.042621  493081 start.go:348] acquiring machines lock for embed-certs-20220325020743-262786: {Name:mk09b5bda74ca4ab49b97f5fa7fb6add6f27caec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:09:13.042750  493081 start.go:352] acquired machines lock for "embed-certs-20220325020743-262786" in 104.726µs
	I0325 02:09:13.042776  493081 start.go:94] Skipping create...Using existing machine configuration
	I0325 02:09:13.042786  493081 fix.go:55] fixHost starting: 
	I0325 02:09:13.043087  493081 cli_runner.go:133] Run: docker container inspect embed-certs-20220325020743-262786 --format={{.State.Status}}
	I0325 02:09:13.078477  493081 fix.go:108] recreateIfNeeded on embed-certs-20220325020743-262786: state=Stopped err=<nil>
	W0325 02:09:13.078516  493081 fix.go:134] unexpected machine state, will restart: <nil>
	I0325 02:09:13.082309  493081 out.go:176] * Restarting existing docker container for "embed-certs-20220325020743-262786" ...
	I0325 02:09:13.082389  493081 cli_runner.go:133] Run: docker start embed-certs-20220325020743-262786
	I0325 02:09:13.478445  493081 cli_runner.go:133] Run: docker container inspect embed-certs-20220325020743-262786 --format={{.State.Status}}
	I0325 02:09:13.515869  493081 kic.go:420] container "embed-certs-20220325020743-262786" state is running.
	I0325 02:09:13.516313  493081 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220325020743-262786
	I0325 02:09:13.552451  493081 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/config.json ...
	I0325 02:09:13.552659  493081 machine.go:88] provisioning docker machine ...
	I0325 02:09:13.552691  493081 ubuntu.go:169] provisioning hostname "embed-certs-20220325020743-262786"
	I0325 02:09:13.552750  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:09:13.588753  493081 main.go:130] libmachine: Using SSH client type: native
	I0325 02:09:13.588959  493081 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49564 <nil> <nil>}
	I0325 02:09:13.588977  493081 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220325020743-262786 && echo "embed-certs-20220325020743-262786" | sudo tee /etc/hostname
	I0325 02:09:13.589627  493081 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46458->127.0.0.1:49564: read: connection reset by peer
	I0325 02:09:16.724873  493081 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20220325020743-262786
	
	I0325 02:09:16.724982  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:09:16.760271  493081 main.go:130] libmachine: Using SSH client type: native
	I0325 02:09:16.760424  493081 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49564 <nil> <nil>}
	I0325 02:09:16.760449  493081 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220325020743-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220325020743-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220325020743-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 02:09:16.878846  493081 main.go:130] libmachine: SSH cmd err, output: <nil>: 
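	The script above is minikube's idempotent /etc/hosts fix-up: if a 127.0.1.1 entry exists it is rewritten to the node's hostname, otherwise one is appended. A standalone sketch of the same logic (NODE is a hypothetical placeholder for whatever hostname is being provisioned):
	
		# ensure /etc/hosts maps 127.0.1.1 to the node hostname exactly once
		NODE=embed-certs-20220325020743-262786   # hypothetical; substitute your node name
		if ! grep -q "\s$NODE\$" /etc/hosts; then
		  if grep -q '^127.0.1.1\s' /etc/hosts; then
		    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NODE/" /etc/hosts
		  else
		    echo "127.0.1.1 $NODE" | sudo tee -a /etc/hosts
		  fi
		fi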
	I0325 02:09:16.878884  493081 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:09:16.878934  493081 ubuntu.go:177] setting up certificates
	I0325 02:09:16.878971  493081 provision.go:83] configureAuth start
	I0325 02:09:16.879049  493081 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220325020743-262786
	I0325 02:09:16.913630  493081 provision.go:138] copyHostCerts
	I0325 02:09:16.913708  493081 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:09:16.913724  493081 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:09:16.913805  493081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:09:16.913931  493081 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:09:16.913949  493081 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:09:16.913985  493081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:09:16.914069  493081 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:09:16.914079  493081 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:09:16.914110  493081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:09:16.914200  493081 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220325020743-262786 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220325020743-262786]
	I0325 02:09:17.051577  493081 provision.go:172] copyRemoteCerts
	I0325 02:09:17.051652  493081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:09:17.051694  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:09:17.087385  493081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49564 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:09:17.174269  493081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:09:17.191695  493081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0325 02:09:17.209444  493081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0325 02:09:17.226241  493081 provision.go:86] duration metric: configureAuth took 347.251298ms
	I0325 02:09:17.226275  493081 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:09:17.226492  493081 config.go:176] Loaded profile config "embed-certs-20220325020743-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:09:17.226508  493081 machine.go:91] provisioned docker machine in 3.673835506s
	I0325 02:09:17.226517  493081 start.go:302] post-start starting for "embed-certs-20220325020743-262786" (driver="docker")
	I0325 02:09:17.226529  493081 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:09:17.226574  493081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:09:17.226610  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:09:17.260751  493081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49564 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:09:17.346515  493081 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:09:17.349401  493081 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:09:17.349426  493081 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:09:17.349434  493081 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:09:17.349441  493081 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:09:17.349451  493081 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:09:17.349501  493081 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:09:17.349565  493081 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:09:17.349641  493081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:09:17.356382  493081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:09:17.373849  493081 start.go:305] post-start completed in 147.30826ms
	I0325 02:09:17.373947  493081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:09:17.374006  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:09:17.409354  493081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49564 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:09:17.491707  493081 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:09:17.495635  493081 fix.go:57] fixHost completed within 4.452841532s
	I0325 02:09:17.495667  493081 start.go:81] releasing machines lock for "embed-certs-20220325020743-262786", held for 4.452901221s
	I0325 02:09:17.495771  493081 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220325020743-262786
	I0325 02:09:17.529588  493081 ssh_runner.go:195] Run: systemctl --version
	I0325 02:09:17.529640  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:09:17.529677  493081 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:09:17.529740  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:09:17.564482  493081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49564 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:09:17.564781  493081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49564 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:09:17.664739  493081 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:09:17.675874  493081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:09:17.685037  493081 docker.go:183] disabling docker service ...
	I0325 02:09:17.685079  493081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:09:17.693838  493081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:09:17.702555  493081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:09:17.776644  493081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:09:17.856986  493081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 02:09:17.866069  493081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:09:17.878664  493081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
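	The long base64 payload above is the rendered containerd configuration; it decodes to /etc/containerd/config.toml, including the non-default CNI conf dir this job runs with. A quick spot-check from the host, assuming the node container is up:
	
		# confirm the decoded config landed with the expected CNI conf dir
		out/minikube-linux-amd64 -p embed-certs-20220325020743-262786 ssh -- sudo grep conf_dir /etc/containerd/config.toml
		# expected: conf_dir = "/etc/cni/net.mk"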
	I0325 02:09:17.892023  493081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:09:17.898285  493081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
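	The two runs above cover the standard kubeadm networking prerequisites: bridged pod traffic must traverse iptables, and IPv4 forwarding must be on. A sketch of the same settings applied by hand (non-persistent; they revert on reboot unless written under /etc/sysctl.d):
	
		# kubeadm networking prerequisites
		sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
		sudo sysctl -w net.ipv4.ip_forward=1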
	I0325 02:09:17.904702  493081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:09:17.978826  493081 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:09:18.053589  493081 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:09:18.053661  493081 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:09:18.057362  493081 start.go:462] Will wait 60s for crictl version
	I0325 02:09:18.057421  493081 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:09:18.082292  493081 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:09:18Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0325 02:09:29.129907  493081 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:09:29.157181  493081 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 02:09:29.157257  493081 ssh_runner.go:195] Run: containerd --version
	I0325 02:09:29.181632  493081 ssh_runner.go:195] Run: containerd --version
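	The earlier "server is not initialized yet" failure is transient while containerd restarts; minikube simply retries "sudo crictl version" until the CRI socket answers, which here succeeded after a single 11s backoff. A hand-rolled equivalent for interactive debugging, assuming shell access on the node:
	
		# poll the CRI runtime until it reports a version, up to ~60s
		for i in $(seq 1 12); do
		  sudo crictl version && break
		  sleep 5
		done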
	I0325 02:09:29.206580  493081 out.go:176] * Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	I0325 02:09:29.206673  493081 cli_runner.go:133] Run: docker network inspect embed-certs-20220325020743-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:09:29.242155  493081 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0325 02:09:29.245424  493081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9d536416454c9       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   0b7c839dde6fb
	f84fedf62f62a       c21b0c7400f98       12 minutes ago      Running             kube-proxy                0                   8329903e5a1d1
	2a8a16a4c5ab0       b305571ca60a5       12 minutes ago      Running             kube-apiserver            0                   6257dca791a92
	0dcaa5ddf16d7       06a629a7e51cd       12 minutes ago      Running             kube-controller-manager   0                   4f6ca772f8d74
	0f2defa775551       301ddc62b80b1       12 minutes ago      Running             kube-scheduler            0                   64b5b98ae89a8
	1366a173f44ad       b2756210eeabf       12 minutes ago      Running             etcd                      0                   f07b14711b6c4
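	Note what the table shows: the control-plane containers (kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kube-proxy) are all still Running, while kindnet-cni has Exited on its third attempt, consistent with the crash loop in the containerd journal below. The table appears to be crictl output; to reproduce it on the node (assuming the profile is still up):
	
		# list all CRI containers, including exited ones
		out/minikube-linux-amd64 -p old-k8s-version-20220325015306-262786 ssh -- sudo crictl ps -a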
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-03-25 01:56:43 UTC, end at Fri 2022-03-25 02:09:32 UTC. --
	Mar 25 02:02:47 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:02:47.860889118Z" level=warning msg="cleaning up after shim disconnected" id=079cd3357f1fdb712691e0e2faf42ffa65a9f250899b730661a824d22e9c22e3 namespace=k8s.io
	Mar 25 02:02:47 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:02:47.860913039Z" level=info msg="cleaning up dead shim"
	Mar 25 02:02:47 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:02:47.872166437Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:02:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4191\n"
	Mar 25 02:02:48 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:02:48.207913986Z" level=info msg="RemoveContainer for \"8e7808702d5d6d554f961b1120eef82835bf0c35a13a50bc3c3deae13e17b0b7\""
	Mar 25 02:02:48 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:02:48.213786829Z" level=info msg="RemoveContainer for \"8e7808702d5d6d554f961b1120eef82835bf0c35a13a50bc3c3deae13e17b0b7\" returns successfully"
	Mar 25 02:03:00 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:03:00.724454941Z" level=info msg="CreateContainer within sandbox \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Mar 25 02:03:00 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:03:00.740978222Z" level=info msg="CreateContainer within sandbox \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"7e2cc5eb32935c5e2a1f1543fa69420edcdf5ce590686a5deb8fccbd8161ee50\""
	Mar 25 02:03:00 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:03:00.741512336Z" level=info msg="StartContainer for \"7e2cc5eb32935c5e2a1f1543fa69420edcdf5ce590686a5deb8fccbd8161ee50\""
	Mar 25 02:03:00 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:03:00.889357951Z" level=info msg="StartContainer for \"7e2cc5eb32935c5e2a1f1543fa69420edcdf5ce590686a5deb8fccbd8161ee50\" returns successfully"
	Mar 25 02:05:41 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:05:41.131613873Z" level=info msg="shim disconnected" id=7e2cc5eb32935c5e2a1f1543fa69420edcdf5ce590686a5deb8fccbd8161ee50
	Mar 25 02:05:41 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:05:41.131705468Z" level=warning msg="cleaning up after shim disconnected" id=7e2cc5eb32935c5e2a1f1543fa69420edcdf5ce590686a5deb8fccbd8161ee50 namespace=k8s.io
	Mar 25 02:05:41 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:05:41.131719145Z" level=info msg="cleaning up dead shim"
	Mar 25 02:05:41 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:05:41.142981774Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:05:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4879\n"
	Mar 25 02:05:41 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:05:41.428795047Z" level=info msg="RemoveContainer for \"079cd3357f1fdb712691e0e2faf42ffa65a9f250899b730661a824d22e9c22e3\""
	Mar 25 02:05:41 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:05:41.434856726Z" level=info msg="RemoveContainer for \"079cd3357f1fdb712691e0e2faf42ffa65a9f250899b730661a824d22e9c22e3\" returns successfully"
	Mar 25 02:06:08 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:06:08.723637539Z" level=info msg="CreateContainer within sandbox \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Mar 25 02:06:08 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:06:08.737240719Z" level=info msg="CreateContainer within sandbox \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c\""
	Mar 25 02:06:08 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:06:08.737789905Z" level=info msg="StartContainer for \"9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c\""
	Mar 25 02:06:08 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:06:08.888989601Z" level=info msg="StartContainer for \"9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c\" returns successfully"
	Mar 25 02:08:49 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:08:49.148519938Z" level=info msg="shim disconnected" id=9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c
	Mar 25 02:08:49 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:08:49.148584986Z" level=warning msg="cleaning up after shim disconnected" id=9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c namespace=k8s.io
	Mar 25 02:08:49 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:08:49.148625839Z" level=info msg="cleaning up dead shim"
	Mar 25 02:08:49 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:08:49.159560133Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:08:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5615\n"
	Mar 25 02:08:49 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:08:49.670683695Z" level=info msg="RemoveContainer for \"7e2cc5eb32935c5e2a1f1543fa69420edcdf5ce590686a5deb8fccbd8161ee50\""
	Mar 25 02:08:49 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:08:49.676264970Z" level=info msg="RemoveContainer for \"7e2cc5eb32935c5e2a1f1543fa69420edcdf5ce590686a5deb8fccbd8161ee50\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220325015306-262786
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220325015306-262786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95
	                    minikube.k8s.io/name=old-k8s-version-20220325015306-262786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_25T01_57_11_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Mar 2022 01:57:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Mar 2022 02:09:07 +0000   Fri, 25 Mar 2022 01:57:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Mar 2022 02:09:07 +0000   Fri, 25 Mar 2022 01:57:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Mar 2022 02:09:07 +0000   Fri, 25 Mar 2022 01:57:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 25 Mar 2022 02:09:07 +0000   Fri, 25 Mar 2022 01:57:02 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-20220325015306-262786
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	System Info:
	 Machine ID:                 140a143b31184b58be947b52a01fff83
	 System UUID:                586019ba-8c2c-445d-9550-f545f1f4ef4d
	 Boot ID:                    63fce5d9-a30b-498a-bfed-7dd46d23a363
	 Kernel Version:             5.13.0-1021-gcp
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.5.10
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20220325015306-262786                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kindnet-rx7hj                                                    100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                kube-apiserver-old-k8s-version-20220325015306-262786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-controller-manager-old-k8s-version-20220325015306-262786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-wxllf                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-20220325015306-262786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)   100m (1%)
	  memory             50Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From                                               Message
	  ----    ------                   ----               ----                                               -------
	  Normal  NodeAllocatableEnforced  12m                kubelet, old-k8s-version-20220325015306-262786     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet, old-k8s-version-20220325015306-262786     Node old-k8s-version-20220325015306-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet, old-k8s-version-20220325015306-262786     Node old-k8s-version-20220325015306-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet, old-k8s-version-20220325015306-262786     Node old-k8s-version-20220325015306-262786 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-20220325015306-262786  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 c2 02 b4 5e ae 08 06
	[  +1.027929] IPv4: martian source 10.244.0.232 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 c2 02 b4 5e ae 08 06
	[Mar25 02:08] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.029280] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.019935] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +2.947849] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.023822] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.019966] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +2.955831] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.015863] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.023925] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +4.012896] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethf379e9f0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 46 c3 2c 62 64 ba 08 06
	[  +2.492008] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethe9bd593f
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 54 6e 3f 19 d8 08 06
	
	* 
	* ==> etcd [1366a173f44ada0abf1e4f2c5003b1d9df1c0ee0a950928cdf3a5f3f7048faaa] <==
	* 2022-03-25 01:57:01.803372 W | auth: simple token is not cryptographically signed
	2022-03-25 01:57:01.806268 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2022-03-25 01:57:01.807413 I | etcdserver: ea7e25599daad906 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2022-03-25 01:57:01.807883 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2022-03-25 01:57:01.808954 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2022-03-25 01:57:01.809140 I | embed: listening for metrics on http://127.0.0.1:2381
	2022-03-25 01:57:01.809206 I | embed: listening for metrics on http://192.168.76.2:2381
	2022-03-25 01:57:02.596023 I | raft: ea7e25599daad906 is starting a new election at term 1
	2022-03-25 01:57:02.596060 I | raft: ea7e25599daad906 became candidate at term 2
	2022-03-25 01:57:02.596077 I | raft: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	2022-03-25 01:57:02.596090 I | raft: ea7e25599daad906 became leader at term 2
	2022-03-25 01:57:02.596097 I | raft: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2022-03-25 01:57:02.596295 I | etcdserver: setting up the initial cluster version to 3.3
	2022-03-25 01:57:02.597359 N | etcdserver/membership: set the initial cluster version to 3.3
	2022-03-25 01:57:02.597406 I | etcdserver/api: enabled capabilities for version 3.3
	2022-03-25 01:57:02.597440 I | etcdserver: published {Name:old-k8s-version-20220325015306-262786 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2022-03-25 01:57:02.597617 I | embed: ready to serve client requests
	2022-03-25 01:57:02.597747 I | embed: ready to serve client requests
	2022-03-25 01:57:02.600650 I | embed: serving client requests on 192.168.76.2:2379
	2022-03-25 01:57:02.601990 I | embed: serving client requests on 127.0.0.1:2379
	2022-03-25 02:03:04.607039 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:799" took too long (118.700488ms) to execute
	2022-03-25 02:03:07.917572 W | etcdserver: read-only range request "key:\"/registry/storageclasses\" range_end:\"/registry/storageclasset\" count_only:true " with result "range_response_count:0 size:7" took too long (207.909341ms) to execute
	2022-03-25 02:04:06.057632 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (131.609091ms) to execute
	2022-03-25 02:07:02.631481 I | mvcc: store.index: compact 479
	2022-03-25 02:07:02.632292 I | mvcc: finished scheduled compaction at 479 (took 465.98µs)
	
	* 
	* ==> kernel <==
	*  02:09:32 up  4:47,  0 users,  load average: 1.61, 1.28, 1.60
	Linux old-k8s-version-20220325015306-262786 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [2a8a16a4c5ab06cec61505599bfcd94a42a8de336bbe343006809032ae98bee0] <==
	* I0325 01:57:05.741087       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0325 01:57:05.742225       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.76.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0325 01:57:05.747229       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
	I0325 01:57:05.747261       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0325 01:57:05.883908       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0325 01:57:05.883932       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0325 01:57:05.884126       1 cache.go:39] Caches are synced for autoregister controller
	I0325 01:57:05.884201       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0325 01:57:06.739679       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0325 01:57:06.739704       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0325 01:57:06.739717       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0325 01:57:06.743177       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I0325 01:57:06.747597       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0325 01:57:06.747620       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0325 01:57:07.493498       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0325 01:57:08.520754       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0325 01:57:08.800880       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0325 01:57:09.114170       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0325 01:57:09.114813       1 controller.go:606] quota admission added evaluator for: endpoints
	I0325 01:57:09.966541       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0325 01:57:10.500104       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0325 01:57:10.871924       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0325 01:57:25.143684       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0325 01:57:25.153906       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0325 01:57:25.619240       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [0dcaa5ddf16d74bb0f7b672cf9c1f93a9049cfc9e9fa01287dfc31c913129a95] <==
	* I0325 01:57:25.519444       1 shared_informer.go:204] Caches are synced for disruption 
	I0325 01:57:25.519471       1 disruption.go:341] Sending events to api server.
	I0325 01:57:25.564979       1 shared_informer.go:204] Caches are synced for persistent volume 
	I0325 01:57:25.567532       1 shared_informer.go:204] Caches are synced for node 
	I0325 01:57:25.567556       1 range_allocator.go:172] Starting range CIDR allocator
	I0325 01:57:25.567570       1 shared_informer.go:197] Waiting for caches to sync for cidrallocator
	I0325 01:57:25.569098       1 shared_informer.go:204] Caches are synced for HPA 
	I0325 01:57:25.569516       1 shared_informer.go:204] Caches are synced for TTL 
	I0325 01:57:25.615069       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0325 01:57:25.619293       1 shared_informer.go:204] Caches are synced for taint 
	I0325 01:57:25.619399       1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone: 
	W0325 01:57:25.619533       1 node_lifecycle_controller.go:903] Missing timestamp for Node old-k8s-version-20220325015306-262786. Assuming now as a timestamp.
	I0325 01:57:25.619601       1 node_lifecycle_controller.go:1058] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0325 01:57:25.619813       1 taint_manager.go:186] Starting NoExecuteTaintManager
	I0325 01:57:25.619960       1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-20220325015306-262786", UID:"f6951a5c-6edc-46f8-beec-3c90a8b9581c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node old-k8s-version-20220325015306-262786 event: Registered Node old-k8s-version-20220325015306-262786 in Controller
	I0325 01:57:25.627002       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"9ebcce20-95c8-46a7-994a-18f1bc7bd92e", APIVersion:"apps/v1", ResourceVersion:"232", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-rx7hj
	I0325 01:57:25.629138       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"6d02422f-16d5-4e4d-a5bf-93392a263b1e", APIVersion:"apps/v1", ResourceVersion:"221", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-wxllf
	E0325 01:57:25.636892       1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"6d02422f-16d5-4e4d-a5bf-93392a263b1e", ResourceVersion:"221", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63783770230, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b60f60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001c3a040), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001b60f80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001b60fa0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001b60fe0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001685180), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0016811f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001643860), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0002ceee8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001681238)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0325 01:57:25.667564       1 shared_informer.go:204] Caches are synced for resource quota 
	I0325 01:57:25.667705       1 shared_informer.go:204] Caches are synced for cidrallocator 
	I0325 01:57:25.669937       1 shared_informer.go:204] Caches are synced for resource quota 
	I0325 01:57:25.670463       1 range_allocator.go:359] Set node old-k8s-version-20220325015306-262786 PodCIDR to [10.244.0.0/24]
	I0325 01:57:25.679094       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0325 01:57:25.722642       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0325 01:57:25.722667       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e] <==
	* W0325 01:57:26.609517       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0325 01:57:26.688448       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0325 01:57:26.688492       1 server_others.go:149] Using iptables Proxier.
	I0325 01:57:26.688881       1 server.go:529] Version: v1.16.0
	I0325 01:57:26.690169       1 config.go:131] Starting endpoints config controller
	I0325 01:57:26.690202       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0325 01:57:26.690377       1 config.go:313] Starting service config controller
	I0325 01:57:26.690393       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0325 01:57:26.790460       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0325 01:57:26.790538       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [0f2defa775551729a53f4b102a79f5f1c8e3853bbb12ba362f6555860b09d99a] <==
	* I0325 01:57:05.800810       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0325 01:57:05.892456       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0325 01:57:05.892758       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0325 01:57:05.892875       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0325 01:57:05.892975       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 01:57:05.893150       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 01:57:05.893319       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 01:57:05.893573       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 01:57:05.894058       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 01:57:05.894470       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 01:57:05.894601       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0325 01:57:05.894681       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 01:57:06.894818       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0325 01:57:06.895872       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0325 01:57:06.897095       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0325 01:57:06.898221       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 01:57:06.899310       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 01:57:06.900400       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 01:57:06.901503       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 01:57:06.902607       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 01:57:06.903724       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 01:57:06.904742       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0325 01:57:06.905998       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 01:57:25.156410       1 factory.go:585] pod is already present in the activeQ
	E0325 01:57:25.162943       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-03-25 01:56:43 UTC, end at Fri 2022-03-25 02:09:32 UTC. --
	Mar 25 02:07:50 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:07:50.975563    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:07:55 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:07:55.976355    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:00 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:00.977135    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:05 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:05.977902    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:10 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:10.978673    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:15 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:15.979410    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:20 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:20.980158    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:25 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:25.980913    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:30 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:30.981662    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:35 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:35.982358    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:40 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:40.983271    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:45 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:45.983975    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:49 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:49.671022    1069 pod_workers.go:191] Error syncing pod bf35a126-09fa-4db9-9aa4-2cb811bf4595 ("kindnet-rx7hj_kube-system(bf35a126-09fa-4db9-9aa4-2cb811bf4595)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-rx7hj_kube-system(bf35a126-09fa-4db9-9aa4-2cb811bf4595)"
	Mar 25 02:08:50 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:50.984630    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:08:55 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:08:55.985344    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:09:00 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:00.986105    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:09:02 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:02.721374    1069 pod_workers.go:191] Error syncing pod bf35a126-09fa-4db9-9aa4-2cb811bf4595 ("kindnet-rx7hj_kube-system(bf35a126-09fa-4db9-9aa4-2cb811bf4595)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-rx7hj_kube-system(bf35a126-09fa-4db9-9aa4-2cb811bf4595)"
	Mar 25 02:09:05 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:05.986848    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:09:10 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:10.987638    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:09:13 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:13.721480    1069 pod_workers.go:191] Error syncing pod bf35a126-09fa-4db9-9aa4-2cb811bf4595 ("kindnet-rx7hj_kube-system(bf35a126-09fa-4db9-9aa4-2cb811bf4595)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-rx7hj_kube-system(bf35a126-09fa-4db9-9aa4-2cb811bf4595)"
	Mar 25 02:09:15 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:15.988296    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:09:20 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:20.989039    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:09:25 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:25.721214    1069 pod_workers.go:191] Error syncing pod bf35a126-09fa-4db9-9aa4-2cb811bf4595 ("kindnet-rx7hj_kube-system(bf35a126-09fa-4db9-9aa4-2cb811bf4595)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-rx7hj_kube-system(bf35a126-09fa-4db9-9aa4-2cb811bf4595)"
	Mar 25 02:09:25 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:25.989728    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:09:30 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:09:30.990393    1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220325015306-262786 -n old-k8s-version-20220325015306-262786
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20220325015306-262786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: busybox coredns-5644d7b6d9-trm4j storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220325015306-262786 describe pod busybox coredns-5644d7b6d9-trm4j storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220325015306-262786 describe pod busybox coredns-5644d7b6d9-trm4j storage-provisioner: exit status 1 (65.413445ms)

-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ltrfn (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  default-token-ltrfn:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-ltrfn
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  8m4s                   default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
	  Warning  FailedScheduling  5m27s (x1 over 6m57s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-trm4j" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20220325015306-262786 describe pod busybox coredns-5644d7b6d9-trm4j storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (484.55s)
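
Reading the captured logs above together: the node never leaves NotReady ("runtime network not ready ... cni plugin not initialized") because the kindnet-cni container exits and is recreated every few minutes (Attempt:2 and Attempt:3 in the containerd log), so the node.kubernetes.io/not-ready:NoSchedule taint is never cleared and the busybox pod can never be scheduled. A minimal sketch of follow-up commands for confirming that chain against this profile; the context and pod names are copied from the log above, and it is an assumption that the cluster is still running when they are issued:

	# inspect the crash-looping CNI pod and its previous run's logs
	kubectl --context old-k8s-version-20220325015306-262786 -n kube-system get pod kindnet-rx7hj
	kubectl --context old-k8s-version-20220325015306-262786 -n kube-system logs kindnet-rx7hj --previous
	# kubelet was started with cni-conf-dir=/etc/cni/net.mk; check whether a CNI config was ever written there
	out/minikube-linux-amd64 -p old-k8s-version-20220325015306-262786 ssh -- ls /etc/cni/net.mk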

TestNetworkPlugins/group/enable-default-cni/DNS (291.54s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147381054s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134533979s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.123514751s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
E0325 02:03:47.791490  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147530259s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
E0325 02:03:56.093935  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140222077s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142487655s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134526265s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.262350499s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141473204s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0325 02:06:00.415751  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 02:06:05.105787  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
E0325 02:06:22.273486  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
E0325 02:06:32.514584  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144496861s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133502014s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (291.54s)
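The probe expects the in-cluster lookup to return the kubernetes service address (the want pattern "10.96.0.1" above, the first IP of the 10.96.0.0/12 service CIDR); every retry instead timed out, which points at CoreDNS or the CNI path to it rather than at the netcat deployment itself. A minimal sketch of reproducing the check by hand against the same profile, assuming the cluster is still up (the second command is exactly what the test runs):

	# CoreDNS pods carry the k8s-app=kube-dns label in kube-system
	kubectl --context enable-default-cni-20220325014920-262786 -n kube-system get pods -l k8s-app=kube-dns
	# Repeat the lookup the test performs
	kubectl --context enable-default-cni-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
	# Confirm the cluster DNS service exists and has endpoints
	kubectl --context enable-default-cni-20220325014920-262786 -n kube-system get svc,endpoints kube-dns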

TestStartStop/group/no-preload/serial/FirstStart (293.09s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220325020326-262786 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.4-rc.0

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-20220325020326-262786 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.4-rc.0: exit status 80 (4m50.998468383s)
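minikube exited with status 80 even though the stdout below reaches "Enabled addons", so the control plane came up and the failure is most likely in the later component wait that --wait=true enables (the stderr shows it waiting on apiserver, apps_running, default_sa, extra, kubelet, node_ready and system_pods). A minimal sketch for collecting diagnostics from the half-started profile, assuming it has not been deleted yet:

	# Component-level health of the profile
	out/minikube-linux-amd64 status -p no-preload-20220325020326-262786
	# Full cluster logs (kubelet, containerd, control-plane pods)
	out/minikube-linux-amd64 logs -p no-preload-20220325020326-262786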

-- stdout --
	* [no-preload-20220325020326-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node no-preload-20220325020326-262786 in cluster no-preload-20220325020326-262786
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.23.4-rc.0 on containerd 1.5.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0325 02:03:26.915961  479985 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:03:26.916095  479985 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:03:26.916104  479985 out.go:310] Setting ErrFile to fd 2...
	I0325 02:03:26.916110  479985 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:03:26.916238  479985 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:03:26.916547  479985 out.go:304] Setting JSON to false
	I0325 02:03:26.918055  479985 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":16879,"bootTime":1648156928,"procs":554,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:03:26.918129  479985 start.go:125] virtualization: kvm guest
	I0325 02:03:26.920992  479985 out.go:176] * [no-preload-20220325020326-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:03:26.922630  479985 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:03:26.921202  479985 notify.go:193] Checking for updates...
	I0325 02:03:26.924047  479985 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:03:26.925520  479985 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:03:26.927098  479985 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:03:26.929219  479985 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:03:26.930444  479985 config.go:176] Loaded profile config "bridge-20220325014920-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:03:26.930569  479985 config.go:176] Loaded profile config "enable-default-cni-20220325014920-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:03:26.930682  479985 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0325 02:03:26.930765  479985 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:03:26.974302  479985 docker.go:136] docker version: linux-20.10.14
	I0325 02:03:26.974418  479985 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:03:27.068156  479985 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:03:27.0052053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:03:27.068278  479985 docker.go:253] overlay module found
	I0325 02:03:27.070732  479985 out.go:176] * Using the docker driver based on user configuration
	I0325 02:03:27.070761  479985 start.go:284] selected driver: docker
	I0325 02:03:27.070769  479985 start.go:801] validating driver "docker" against <nil>
	I0325 02:03:27.070787  479985 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:03:27.070831  479985 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:03:27.070854  479985 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0325 02:03:27.072259  479985 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:03:27.072910  479985 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:03:27.167142  479985 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:03:27.104293756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:03:27.167285  479985 start_flags.go:290] no existing cluster config was found, will generate one from the flags 
	I0325 02:03:27.167430  479985 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 02:03:27.167449  479985 cni.go:93] Creating CNI manager for ""
	I0325 02:03:27.167457  479985 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:03:27.167464  479985 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0325 02:03:27.167471  479985 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0325 02:03:27.167475  479985 start_flags.go:299] Found "CNI" CNI - setting NetworkPlugin=cni
	I0325 02:03:27.167484  479985 start_flags.go:304] config:
	{Name:no-preload-20220325020326-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:03:27.169929  479985 out.go:176] * Starting control plane node no-preload-20220325020326-262786 in cluster no-preload-20220325020326-262786
	I0325 02:03:27.169970  479985 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:03:27.171527  479985 out.go:176] * Pulling base image ...
	I0325 02:03:27.171560  479985 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime containerd
	I0325 02:03:27.171628  479985 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:03:27.171702  479985 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/config.json ...
	I0325 02:03:27.171755  479985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/config.json: {Name:mkbd3088edab17c2ca70f4dca1383b65f779f9a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:03:27.171867  479985 cache.go:107] acquiring lock: {Name:mk0987b0339865c5416a6746bce8670ad78c0a9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:03:27.171895  479985 cache.go:107] acquiring lock: {Name:mkadc5033eb4d9179acd1c6e7ff0e25d4981568c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:03:27.171937  479985 cache.go:107] acquiring lock: {Name:mkcb4c0577b6fb6a4cc15cd1cfc04742789dcc24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:03:27.171961  479985 cache.go:107] acquiring lock: {Name:mkdc6a82c5ad28a9b97463884b87944eaef2fef7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:03:27.171976  479985 cache.go:107] acquiring lock: {Name:mk140b8e2c06d387b642b813a7efd82a9f19d6c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:03:27.171983  479985 cache.go:107] acquiring lock: {Name:mk61dd10aefdeb5283d07e3024688797852e36d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:03:27.171992  479985 cache.go:107] acquiring lock: {Name:mk1134717661547774a1dd6d6e2854162646543d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:03:27.172051  479985 cache.go:107] acquiring lock: {Name:mkd382d09a068cdb98cdc085f7d3d174faef8f1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:03:27.172087  479985 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0325 02:03:27.172095  479985 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.23.4-rc.0
	I0325 02:03:27.172103  479985 cache.go:107] acquiring lock: {Name:mk8ed79f1ecf0bc83b0d3ead06534032f65db356 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:03:27.172113  479985 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 132.103µs
	I0325 02:03:27.172133  479985 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0325 02:03:27.172146  479985 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0
	I0325 02:03:27.172149  479985 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists
	I0325 02:03:27.172162  479985 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists
	I0325 02:03:27.172151  479985 cache.go:107] acquiring lock: {Name:mkcf6d57389d13d4e31240b1cdf9af5455cf82f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:03:27.172168  479985 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 321.6µs
	I0325 02:03:27.172179  479985 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded
	I0325 02:03:27.172179  479985 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 80.092µs
	I0325 02:03:27.172186  479985 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.23.4-rc.0
	I0325 02:03:27.172193  479985 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded
	I0325 02:03:27.172201  479985 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
	I0325 02:03:27.172209  479985 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 exists
	I0325 02:03:27.172219  479985 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 332.126µs
	I0325 02:03:27.172229  479985 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
	I0325 02:03:27.172224  479985 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1" took 76.673µs
	I0325 02:03:27.172237  479985 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
	I0325 02:03:27.172234  479985 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0325 02:03:27.172257  479985 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 329.017µs
	I0325 02:03:27.172267  479985 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0325 02:03:27.172298  479985 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.23.4-rc.0
	I0325 02:03:27.173219  479985 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.23.4-rc.0: Error response from daemon: reference does not exist
	I0325 02:03:27.173232  479985 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.23.4-rc.0: Error response from daemon: reference does not exist
	I0325 02:03:27.173263  479985 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.23.4-rc.0: Error response from daemon: reference does not exist
	I0325 02:03:27.173302  479985 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0: Error response from daemon: reference does not exist
	I0325 02:03:27.214509  479985 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:03:27.214535  479985 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:03:27.214552  479985 cache.go:208] Successfully downloaded all kic artifacts
	I0325 02:03:27.214577  479985 start.go:348] acquiring machines lock for no-preload-20220325020326-262786: {Name:mk0b68e00c1687cd51ada59f78a2181cd58687dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:03:27.214709  479985 start.go:352] acquired machines lock for "no-preload-20220325020326-262786" in 114.007µs
	I0325 02:03:27.214747  479985 start.go:90] Provisioning new machine with config: &{Name:no-preload-20220325020326-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:03:27.214861  479985 start.go:127] createHost starting for "" (driver="docker")
	I0325 02:03:27.217490  479985 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0325 02:03:27.217756  479985 start.go:161] libmachine.API.Create for "no-preload-20220325020326-262786" (driver="docker")
	I0325 02:03:27.217799  479985 client.go:168] LocalClient.Create starting
	I0325 02:03:27.217866  479985 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0325 02:03:27.217909  479985 main.go:130] libmachine: Decoding PEM data...
	I0325 02:03:27.217933  479985 main.go:130] libmachine: Parsing certificate...
	I0325 02:03:27.217995  479985 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0325 02:03:27.218014  479985 main.go:130] libmachine: Decoding PEM data...
	I0325 02:03:27.218030  479985 main.go:130] libmachine: Parsing certificate...
	I0325 02:03:27.218406  479985 cli_runner.go:133] Run: docker network inspect no-preload-20220325020326-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0325 02:03:27.252304  479985 cli_runner.go:180] docker network inspect no-preload-20220325020326-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0325 02:03:27.252378  479985 network_create.go:254] running [docker network inspect no-preload-20220325020326-262786] to gather additional debugging logs...
	I0325 02:03:27.252397  479985 cli_runner.go:133] Run: docker network inspect no-preload-20220325020326-262786
	W0325 02:03:27.283905  479985 cli_runner.go:180] docker network inspect no-preload-20220325020326-262786 returned with exit code 1
	I0325 02:03:27.283942  479985 network_create.go:257] error running [docker network inspect no-preload-20220325020326-262786]: docker network inspect no-preload-20220325020326-262786: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220325020326-262786
	I0325 02:03:27.283972  479985 network_create.go:259] output of [docker network inspect no-preload-20220325020326-262786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220325020326-262786
	
	** /stderr **
	I0325 02:03:27.284037  479985 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:03:27.317188  479985 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-ae7d63f7c465 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a9:90:49:95}}
	I0325 02:03:27.317820  479985 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-1432da0e302d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:55:4d:8f:aa}}
	I0325 02:03:27.318514  479985 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0000102e0] misses:0}
	I0325 02:03:27.318549  479985 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0325 02:03:27.318561  479985 network_create.go:106] attempt to create docker network no-preload-20220325020326-262786 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0325 02:03:27.318602  479985 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220325020326-262786
	I0325 02:03:27.392388  479985 network_create.go:90] docker network no-preload-20220325020326-262786 192.168.67.0/24 created
	I0325 02:03:27.392423  479985 kic.go:106] calculated static IP "192.168.67.2" for the "no-preload-20220325020326-262786" container
	I0325 02:03:27.392471  479985 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0325 02:03:27.427454  479985 cache.go:161] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0
	I0325 02:03:27.430626  479985 cache.go:161] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0
	I0325 02:03:27.432021  479985 cache.go:161] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0
	I0325 02:03:27.432191  479985 cli_runner.go:133] Run: docker volume create no-preload-20220325020326-262786 --label name.minikube.sigs.k8s.io=no-preload-20220325020326-262786 --label created_by.minikube.sigs.k8s.io=true
	I0325 02:03:27.432486  479985 cache.go:161] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0
	I0325 02:03:27.470787  479985 oci.go:102] Successfully created a docker volume no-preload-20220325020326-262786
	I0325 02:03:27.470864  479985 cli_runner.go:133] Run: docker run --rm --name no-preload-20220325020326-262786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20220325020326-262786 --entrypoint /usr/bin/test -v no-preload-20220325020326-262786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0325 02:03:27.791595  479985 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0 exists
	I0325 02:03:27.791646  479985 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0" took 619.731096ms
	I0325 02:03:27.791668  479985 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0 succeeded
	I0325 02:03:27.943339  479985 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0 exists
	I0325 02:03:27.943469  479985 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0" took 771.416584ms
	I0325 02:03:27.943501  479985 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0 succeeded
	I0325 02:03:27.996011  479985 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0 exists
	I0325 02:03:27.996079  479985 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0" took 824.199638ms
	I0325 02:03:27.996098  479985 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0 succeeded
	I0325 02:03:27.997022  479985 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0 exists
	I0325 02:03:27.997071  479985 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0" took 825.098163ms
	I0325 02:03:27.997095  479985 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0 succeeded
	I0325 02:03:27.997112  479985 cache.go:87] Successfully saved all images to host disk.
	I0325 02:03:28.408229  479985 oci.go:106] Successfully prepared a docker volume no-preload-20220325020326-262786
	I0325 02:03:28.408278  479985 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime containerd
	W0325 02:03:28.408327  479985 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0325 02:03:28.408350  479985 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0325 02:03:28.408430  479985 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0325 02:03:28.502791  479985 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-20220325020326-262786 --name no-preload-20220325020326-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20220325020326-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-20220325020326-262786 --network no-preload-20220325020326-262786 --ip 192.168.67.2 --volume no-preload-20220325020326-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0325 02:03:28.916070  479985 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Running}}
	I0325 02:03:28.952275  479985 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:03:28.987733  479985 cli_runner.go:133] Run: docker exec no-preload-20220325020326-262786 stat /var/lib/dpkg/alternatives/iptables
	I0325 02:03:29.058604  479985 oci.go:281] the created container "no-preload-20220325020326-262786" has a running status.
	I0325 02:03:29.058644  479985 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa...
	I0325 02:03:29.240640  479985 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0325 02:03:29.329928  479985 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:03:29.365908  479985 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0325 02:03:29.365933  479985 kic_runner.go:114] Args: [docker exec --privileged no-preload-20220325020326-262786 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0325 02:03:29.454807  479985 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:03:29.491124  479985 machine.go:88] provisioning docker machine ...
	I0325 02:03:29.491172  479985 ubuntu.go:169] provisioning hostname "no-preload-20220325020326-262786"
	I0325 02:03:29.491245  479985 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:03:29.526424  479985 main.go:130] libmachine: Using SSH client type: native
	I0325 02:03:29.526636  479985 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49554 <nil> <nil>}
	I0325 02:03:29.526653  479985 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20220325020326-262786 && echo "no-preload-20220325020326-262786" | sudo tee /etc/hostname
	I0325 02:03:29.655617  479985 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20220325020326-262786
	
	I0325 02:03:29.655708  479985 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:03:29.689419  479985 main.go:130] libmachine: Using SSH client type: native
	I0325 02:03:29.689584  479985 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49554 <nil> <nil>}
	I0325 02:03:29.689606  479985 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220325020326-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220325020326-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220325020326-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 02:03:29.806710  479985 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 02:03:29.806741  479985 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:03:29.806768  479985 ubuntu.go:177] setting up certificates
	I0325 02:03:29.806779  479985 provision.go:83] configureAuth start
	I0325 02:03:29.806828  479985 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220325020326-262786
	I0325 02:03:29.839632  479985 provision.go:138] copyHostCerts
	I0325 02:03:29.839689  479985 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:03:29.839696  479985 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:03:29.839766  479985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:03:29.839852  479985 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:03:29.839863  479985 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:03:29.839887  479985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:03:29.840002  479985 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:03:29.840020  479985 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:03:29.840049  479985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:03:29.840127  479985 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220325020326-262786 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220325020326-262786]
	I0325 02:03:29.969432  479985 provision.go:172] copyRemoteCerts
	I0325 02:03:29.969495  479985 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:03:29.969532  479985 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:03:30.004425  479985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49554 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:03:30.091330  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:03:30.110226  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0325 02:03:30.128058  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0325 02:03:30.145970  479985 provision.go:86] duration metric: configureAuth took 339.176007ms
	I0325 02:03:30.145998  479985 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:03:30.146148  479985 config.go:176] Loaded profile config "no-preload-20220325020326-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:03:30.146159  479985 machine.go:91] provisioned docker machine in 655.005651ms
	I0325 02:03:30.146164  479985 client.go:171] LocalClient.Create took 2.928359422s
	I0325 02:03:30.146179  479985 start.go:169] duration metric: libmachine.API.Create for "no-preload-20220325020326-262786" took 2.928427299s
	I0325 02:03:30.146193  479985 start.go:302] post-start starting for "no-preload-20220325020326-262786" (driver="docker")
	I0325 02:03:30.146198  479985 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:03:30.146242  479985 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:03:30.146281  479985 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:03:30.181975  479985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49554 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:03:30.266684  479985 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:03:30.269642  479985 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:03:30.269674  479985 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:03:30.269688  479985 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:03:30.269694  479985 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:03:30.269709  479985 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:03:30.269763  479985 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:03:30.269857  479985 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:03:30.269938  479985 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:03:30.276818  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:03:30.294514  479985 start.go:305] post-start completed in 148.304408ms
	I0325 02:03:30.294858  479985 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220325020326-262786
	I0325 02:03:30.330202  479985 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/config.json ...
	I0325 02:03:30.330492  479985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:03:30.330555  479985 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:03:30.364866  479985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49554 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:03:30.447445  479985 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:03:30.451161  479985 start.go:130] duration metric: createHost completed in 3.236287814s
	I0325 02:03:30.451181  479985 start.go:81] releasing machines lock for "no-preload-20220325020326-262786", held for 3.236447948s
	I0325 02:03:30.451268  479985 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220325020326-262786
	I0325 02:03:30.485891  479985 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:03:30.485935  479985 ssh_runner.go:195] Run: systemctl --version
	I0325 02:03:30.485962  479985 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:03:30.485972  479985 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:03:30.521626  479985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49554 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:03:30.522220  479985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49554 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:03:30.625415  479985 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:03:30.635291  479985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:03:30.644855  479985 docker.go:183] disabling docker service ...
	I0325 02:03:30.644903  479985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:03:30.661362  479985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:03:30.670066  479985 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:03:30.747255  479985 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:03:30.821869  479985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
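Stopping docker.socket before docker.service matters here: socket activation would otherwise restart the daemon on the next connection, and masking makes the disablement stick. A quick way to confirm the end state (a sketch, not part of the test run):
    sudo systemctl mask docker.service     # symlinks the unit to /dev/null so nothing can start it
    systemctl is-enabled docker.service    # reports "masked"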
	I0325 02:03:30.831306  479985 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:03:30.844309  479985 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
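The long printf argument above is the node's complete containerd config.toml, shipped as base64 so it survives shell quoting. Decoding it (a sketch; the path holding a copy of the blob is hypothetical) surfaces the settings that matter for this run:
    base64 -d /tmp/containerd-config.b64 | grep -E 'sandbox_image|conf_dir|SystemdCgroup'
    #   sandbox_image = "k8s.gcr.io/pause:3.6"
    #   conf_dir = "/etc/cni/net.mk"     <- matches kubelet's --cni-conf-dir flag later in the log
    #   SystemdCgroup = false            <- cgroupfs, consistent with the kubelet config below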
	I0325 02:03:30.857605  479985 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:03:30.864100  479985 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:03:30.870097  479985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:03:30.942555  479985 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:03:31.007996  479985 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:03:31.008062  479985 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:03:31.011428  479985 start.go:462] Will wait 60s for crictl version
	I0325 02:03:31.011482  479985 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:03:31.033880  479985 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
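This bare `sudo crictl version` succeeds without an endpoint flag because of the /etc/crictl.yaml written at 02:03:30.831306 above, which points CRI clients at the containerd socket. The explicit equivalent (a sketch, not part of the test run):
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version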
	I0325 02:03:31.033944  479985 ssh_runner.go:195] Run: containerd --version
	I0325 02:03:31.056078  479985 ssh_runner.go:195] Run: containerd --version
	I0325 02:03:31.082856  479985 out.go:176] * Preparing Kubernetes v1.23.4-rc.0 on containerd 1.5.10 ...
	I0325 02:03:31.083032  479985 cli_runner.go:133] Run: docker network inspect no-preload-20220325020326-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:03:31.116832  479985 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0325 02:03:31.120258  479985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
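The one-liner above is an idempotent /etc/hosts update: drop any stale host.minikube.internal line, append the fresh mapping, then `sudo cp` the temp file into place (a plain `sudo ... > /etc/hosts` would fail, because the redirect is opened by the unprivileged shell, not by sudo). The generic pattern, with placeholder NAME/ADDR:
    NAME=host.minikube.internal ADDR=192.168.67.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts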
	I0325 02:03:31.132225  479985 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 02:03:31.132316  479985 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime containerd
	I0325 02:03:31.132369  479985 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:03:31.156312  479985 containerd.go:608] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.23.4-rc.0". assuming images are not preloaded.
	I0325 02:03:31.156346  479985 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.23.4-rc.0 k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0 k8s.gcr.io/kube-scheduler:v1.23.4-rc.0 k8s.gcr.io/kube-proxy:v1.23.4-rc.0 k8s.gcr.io/pause:3.6 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.3.1 docker.io/kubernetesui/metrics-scraper:v1.0.7]
	I0325 02:03:31.156428  479985 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I0325 02:03:31.156440  479985 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.23.4-rc.0
	I0325 02:03:31.156460  479985 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0
	I0325 02:03:31.156466  479985 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0325 02:03:31.156517  479985 image.go:134] retrieving image: k8s.gcr.io/pause:3.6
	I0325 02:03:31.156553  479985 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.1-0
	I0325 02:03:31.156584  479985 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:03:31.156442  479985 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.23.4-rc.0
	I0325 02:03:31.156553  479985 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
	I0325 02:03:31.156678  479985 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.23.4-rc.0
	I0325 02:03:31.157611  479985 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.23.4-rc.0: Error response from daemon: reference does not exist
	I0325 02:03:31.157689  479985 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: reference does not exist
	I0325 02:03:31.157714  479985 image.go:180] daemon lookup for k8s.gcr.io/pause:3.6: Error response from daemon: reference does not exist
	I0325 02:03:31.157739  479985 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: reference does not exist
	I0325 02:03:31.157763  479985 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0: Error response from daemon: reference does not exist
	I0325 02:03:31.157779  479985 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.5.1-0: Error response from daemon: reference does not exist
	I0325 02:03:31.157611  479985 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.23.4-rc.0: Error response from daemon: reference does not exist
	I0325 02:03:31.157747  479985 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: reference does not exist
	I0325 02:03:31.157842  479985 image.go:180] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error response from daemon: reference does not exist
	I0325 02:03:31.157877  479985 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.23.4-rc.0: Error response from daemon: reference does not exist
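The ten "reference does not exist" messages are expected on this no-preload run: minikube first asks the host's Docker daemon for each image and only falls back to its on-disk cache (and, failing that, the registry) on a miss. A manual equivalent of the probe (sketch):
    docker image inspect k8s.gcr.io/kube-proxy:v1.23.4-rc.0 >/dev/null 2>&1 \
      || echo "not in the local daemon; will load from cache"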
	I0325 02:03:31.377015  479985 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.7"
	I0325 02:03:31.377686  479985 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.3.1"
	I0325 02:03:31.399744  479985 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.23.4-rc.0"
	I0325 02:03:31.400091  479985 cache_images.go:116] "docker.io/kubernetesui/metrics-scraper:v1.0.7" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.7" does not exist at hash "7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9" in container runtime
	I0325 02:03:31.400168  479985 cri.go:216] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I0325 02:03:31.400211  479985 ssh_runner.go:195] Run: which crictl
	I0325 02:03:31.400268  479985 cache_images.go:116] "docker.io/kubernetesui/dashboard:v2.3.1" needs transfer: "docker.io/kubernetesui/dashboard:v2.3.1" does not exist at hash "e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570" in container runtime
	I0325 02:03:31.400308  479985 cri.go:216] Removing image: docker.io/kubernetesui/dashboard:v2.3.1
	I0325 02:03:31.400347  479985 ssh_runner.go:195] Run: which crictl
	I0325 02:03:31.401178  479985 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.23.4-rc.0"
	I0325 02:03:31.402630  479985 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.6"
	I0325 02:03:31.403377  479985 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0"
	I0325 02:03:31.404042  479985 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.1-0"
	I0325 02:03:31.428479  479985 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.23.4-rc.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.23.4-rc.0" does not exist at hash "ce3b8500a91ff5210ec72ad7b6794a2eeae31dade40a2c752abb9b3dafabaca4" in container runtime
	I0325 02:03:31.428542  479985 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.23.4-rc.0
	I0325 02:03:31.428567  479985 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi docker.io/kubernetesui/dashboard:v2.3.1
	I0325 02:03:31.428582  479985 ssh_runner.go:195] Run: which crictl
	I0325 02:03:31.428642  479985 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi docker.io/kubernetesui/metrics-scraper:v1.0.7
	I0325 02:03:31.431285  479985 cache_images.go:116] "k8s.gcr.io/pause:3.6" needs transfer: "k8s.gcr.io/pause:3.6" does not exist at hash "6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee" in container runtime
	I0325 02:03:31.431333  479985 cri.go:216] Removing image: k8s.gcr.io/pause:3.6
	I0325 02:03:31.431375  479985 ssh_runner.go:195] Run: which crictl
	I0325 02:03:31.431382  479985 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.23.4-rc.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.23.4-rc.0" does not exist at hash "4a82fd4414312c8e7ac073c0b5f6b2572e548bfa83a58570a08e21f4f19843df" in container runtime
	I0325 02:03:31.431437  479985 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.23.4-rc.0
	I0325 02:03:31.431473  479985 ssh_runner.go:195] Run: which crictl
	I0325 02:03:31.436993  479985 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.23.4-rc.0"
	I0325 02:03:31.448002  479985 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0325 02:03:31.452874  479985 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
	I0325 02:03:31.484838  479985 cache_images.go:116] "k8s.gcr.io/etcd:3.5.1-0" needs transfer: "k8s.gcr.io/etcd:3.5.1-0" does not exist at hash "25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d" in container runtime
	I0325 02:03:31.484901  479985 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.1-0
	I0325 02:03:31.484909  479985 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0" does not exist at hash "9f243260866d4575ce07c22751b3e31f9ec820d9162ab2fdba2a18365aa70198" in container runtime
	I0325 02:03:31.484955  479985 ssh_runner.go:195] Run: which crictl
	I0325 02:03:31.484960  479985 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0
	I0325 02:03:31.485067  479985 ssh_runner.go:195] Run: which crictl
	I0325 02:03:31.513655  479985 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.23.4-rc.0
	I0325 02:03:31.513685  479985 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7
	I0325 02:03:31.513711  479985 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1
	I0325 02:03:31.513756  479985 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.6
	I0325 02:03:31.513781  479985 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.7
	I0325 02:03:31.513785  479985 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.23.4-rc.0
	I0325 02:03:31.513757  479985 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.23.4-rc.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.23.4-rc.0" does not exist at hash "abbcf459c773939424ce6eed3c91396c154f01c306775d40627d5fd4471c8030" in container runtime
	I0325 02:03:31.513827  479985 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0
	I0325 02:03:31.513830  479985 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.1-0
	I0325 02:03:31.513780  479985 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0325 02:03:31.513855  479985 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.3.1
	I0325 02:03:31.513874  479985 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0325 02:03:31.513900  479985 ssh_runner.go:195] Run: which crictl
	I0325 02:03:31.513830  479985 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.23.4-rc.0
	I0325 02:03:31.513798  479985 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0325 02:03:31.513968  479985 ssh_runner.go:195] Run: which crictl
	I0325 02:03:31.513988  479985 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:03:31.514013  479985 ssh_runner.go:195] Run: which crictl
	I0325 02:03:31.611387  479985 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6
	I0325 02:03:31.611472  479985 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0
	I0325 02:03:31.611399  479985 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0
	I0325 02:03:31.611505  479985 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.6
	I0325 02:03:31.611551  479985 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.23.4-rc.0
	I0325 02:03:31.611562  479985 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.23.4-rc.0
	I0325 02:03:31.611626  479985 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0
	I0325 02:03:31.611652  479985 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0
	I0325 02:03:31.611703  479985 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.1-0
	I0325 02:03:31.611710  479985 ssh_runner.go:352] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.7: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.7': No such file or directory
	I0325 02:03:31.611735  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 --> /var/lib/minikube/images/metrics-scraper_v1.0.7 (15031296 bytes)
	I0325 02:03:31.611712  479985 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.23.4-rc.0
	I0325 02:03:31.611768  479985 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
	I0325 02:03:31.611811  479985 ssh_runner.go:352] existence check for /var/lib/minikube/images/dashboard_v2.3.1: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/dashboard_v2.3.1': No such file or directory
	I0325 02:03:31.611838  479985 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.23.4-rc.0
	I0325 02:03:31.611840  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 --> /var/lib/minikube/images/dashboard_v2.3.1 (66936320 bytes)
	I0325 02:03:31.611877  479985 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:03:31.618128  479985 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.23.4-rc.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.23.4-rc.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.23.4-rc.0': No such file or directory
	I0325 02:03:31.618157  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0 --> /var/lib/minikube/images/kube-apiserver_v1.23.4-rc.0 (32602112 bytes)
	I0325 02:03:31.618167  479985 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.23.4-rc.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.23.4-rc.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.23.4-rc.0': No such file or directory
	I0325 02:03:31.618207  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0 --> /var/lib/minikube/images/kube-controller-manager_v1.23.4-rc.0 (30169088 bytes)
	I0325 02:03:31.618245  479985 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.6: stat -c "%s %y" /var/lib/minikube/images/pause_3.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.6': No such file or directory
	I0325 02:03:31.618281  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 --> /var/lib/minikube/images/pause_3.6 (301056 bytes)
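Each image follows the same check-then-copy idiom: `stat -c "%s %y"` probes the tarball on the node, exit status 1 means it is absent, and only then is it transferred. Roughly (a sketch with placeholder paths; the real transfer rides the existing ssh session rather than the scp binary):
    if ! stat -c "%s %y" /var/lib/minikube/images/pause_3.6 >/dev/null 2>&1; then
      scp ./cache/images/amd64/k8s.gcr.io/pause_3.6 docker@node:/var/lib/minikube/images/pause_3.6
    fi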
	I0325 02:03:31.703956  479985 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.23.4-rc.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.23.4-rc.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.23.4-rc.0': No such file or directory
	I0325 02:03:31.704002  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0 --> /var/lib/minikube/images/kube-scheduler_v1.23.4-rc.0 (15133184 bytes)
	I0325 02:03:31.704092  479985 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I0325 02:03:31.704178  479985 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0325 02:03:31.704384  479985 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.1-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.1-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.1-0': No such file or directory
	I0325 02:03:31.704410  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 --> /var/lib/minikube/images/etcd_3.5.1-0 (98891776 bytes)
	I0325 02:03:31.710507  479985 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0325 02:03:31.710561  479985 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0
	I0325 02:03:31.710609  479985 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0325 02:03:31.710643  479985 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.23.4-rc.0
	W0325 02:03:31.711253  479985 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0325 02:03:31.711332  479985 retry.go:31] will retry after 276.165072ms: ssh: rejected: connect failed (open failed)
	W0325 02:03:31.711270  479985 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0325 02:03:31.711366  479985 retry.go:31] will retry after 360.127272ms: ssh: rejected: connect failed (open failed)
	W0325 02:03:31.711288  479985 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0325 02:03:31.711400  479985 retry.go:31] will retry after 291.140013ms: ssh: rejected: connect failed (open failed)
	I0325 02:03:31.727322  479985 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0325 02:03:31.727361  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I0325 02:03:31.727436  479985 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:03:31.761311  479985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49554 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:03:31.788513  479985 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.23.4-rc.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.23.4-rc.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.23.4-rc.0': No such file or directory
	I0325 02:03:31.788566  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0 --> /var/lib/minikube/images/kube-proxy_v1.23.4-rc.0 (39276544 bytes)
	I0325 02:03:31.788633  479985 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:03:31.789365  479985 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0325 02:03:31.789403  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0325 02:03:31.789450  479985 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:03:31.840138  479985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49554 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:03:31.841332  479985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49554 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:03:31.900285  479985 containerd.go:292] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.7
	I0325 02:03:31.900362  479985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/metrics-scraper_v1.0.7
	I0325 02:03:33.236261  479985 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/metrics-scraper_v1.0.7: (1.335865771s)
	I0325 02:03:33.236294  479985 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 from cache
	I0325 02:03:33.236322  479985 containerd.go:292] Loading image: /var/lib/minikube/images/kube-scheduler_v1.23.4-rc.0
	I0325 02:03:33.236359  479985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.23.4-rc.0
	I0325 02:03:34.109426  479985 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0 from cache
	I0325 02:03:34.109488  479985 containerd.go:292] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0325 02:03:34.109543  479985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I0325 02:03:34.761168  479985 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
	I0325 02:03:34.761208  479985 containerd.go:292] Loading image: /var/lib/minikube/images/pause_3.6
	I0325 02:03:34.761246  479985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.6
	I0325 02:03:34.871505  479985 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 from cache
	I0325 02:03:34.871552  479985 containerd.go:292] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0325 02:03:34.871599  479985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0325 02:03:35.333674  479985 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0325 02:03:35.333767  479985 containerd.go:292] Loading image: /var/lib/minikube/images/kube-proxy_v1.23.4-rc.0
	I0325 02:03:35.333826  479985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.23.4-rc.0
	I0325 02:03:37.042469  479985 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.23.4-rc.0: (1.708615805s)
	I0325 02:03:37.042501  479985 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0 from cache
	I0325 02:03:37.042532  479985 containerd.go:292] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.23.4-rc.0
	I0325 02:03:37.042579  479985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.23.4-rc.0
	I0325 02:03:38.420809  479985 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.23.4-rc.0: (1.378195555s)
	I0325 02:03:38.420848  479985 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0 from cache
	I0325 02:03:38.420883  479985 containerd.go:292] Loading image: /var/lib/minikube/images/kube-apiserver_v1.23.4-rc.0
	I0325 02:03:38.420930  479985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.23.4-rc.0
	I0325 02:03:39.892103  479985 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.23.4-rc.0: (1.471145113s)
	I0325 02:03:39.892134  479985 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0 from cache
	I0325 02:03:39.892164  479985 containerd.go:292] Loading image: /var/lib/minikube/images/dashboard_v2.3.1
	I0325 02:03:39.892202  479985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.3.1
	I0325 02:03:42.751928  479985 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.3.1: (2.859699934s)
	I0325 02:03:42.751958  479985 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 from cache
	I0325 02:03:42.751982  479985 containerd.go:292] Loading image: /var/lib/minikube/images/etcd_3.5.1-0
	I0325 02:03:42.752016  479985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.1-0
	I0325 02:03:46.510712  479985 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.1-0: (3.75867033s)
	I0325 02:03:46.510752  479985 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 from cache
	I0325 02:03:46.510778  479985 cache_images.go:123] Successfully loaded all cached images
	I0325 02:03:46.510784  479985 cache_images.go:92] LoadImages completed in 15.354425195s
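All ten tarballs were imported into containerd's k8s.io namespace, the namespace the CRI plugin serves images from; importing into the default namespace would leave them invisible to kubelet. The manual equivalent of one load, plus a verification step (sketch):
    sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.1-0
    sudo ctr -n=k8s.io images ls | grep etcd    # confirm the image is visible to the CRI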
	I0325 02:03:46.510839  479985 ssh_runner.go:195] Run: sudo crictl info
	I0325 02:03:46.536837  479985 cni.go:93] Creating CNI manager for ""
	I0325 02:03:46.536866  479985 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:03:46.536879  479985 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 02:03:46.536892  479985 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.4-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220325020326-262786 NodeName:no-preload-20220325020326-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 02:03:46.537049  479985 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220325020326-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.4-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0325 02:03:46.537126  479985 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.4-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220325020326-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
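In the drop-in above, the empty `ExecStart=` line is the standard systemd idiom for clearing the packaged unit's command before the override supplies the full kubelet invocation. The file lands on the node via the `scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf` step below; done by hand it would look roughly like:
    sudo install -D -m 644 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet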
	I0325 02:03:46.537173  479985 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4-rc.0
	I0325 02:03:46.544781  479985 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.23.4-rc.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.23.4-rc.0': No such file or directory
	
	Initiating transfer...
	I0325 02:03:46.544839  479985 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.23.4-rc.0
	I0325 02:03:46.553291  479985 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.4-rc.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.4-rc.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/linux/amd64/v1.23.4-rc.0/kubelet
	I0325 02:03:46.553374  479985 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.23.4-rc.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.4-rc.0/bin/linux/amd64/kubectl.sha256
	I0325 02:03:46.553400  479985 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.4-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.4-rc.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/linux/amd64/v1.23.4-rc.0/kubeadm
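The `?checksum=file:...sha256` suffix tells minikube's download layer to fetch the published digest file and verify the binary against it before caching. Verifying the same artifact by hand (a sketch; the release .sha256 files contain just the hex digest):
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.4-rc.0/bin/linux/amd64/kubelet
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.4-rc.0/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum -c -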
	I0325 02:03:46.553500  479985 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl
	I0325 02:03:46.557460  479985 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.23.4-rc.0/kubectl': No such file or directory
	I0325 02:03:46.557502  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/linux/amd64/v1.23.4-rc.0/kubectl --> /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl (46587904 bytes)
	I0325 02:03:47.130760  479985 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.4-rc.0/kubeadm
	I0325 02:03:47.135703  479985 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.4-rc.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.4-rc.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.23.4-rc.0/kubeadm': No such file or directory
	I0325 02:03:47.135750  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/linux/amd64/v1.23.4-rc.0/kubeadm --> /var/lib/minikube/binaries/v1.23.4-rc.0/kubeadm (45211648 bytes)
	I0325 02:03:47.626307  479985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:03:47.636677  479985 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.4-rc.0/kubelet
	I0325 02:03:47.640032  479985 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.4-rc.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.4-rc.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.23.4-rc.0/kubelet': No such file or directory
	I0325 02:03:47.640077  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/linux/amd64/v1.23.4-rc.0/kubelet --> /var/lib/minikube/binaries/v1.23.4-rc.0/kubelet (124521440 bytes)
	I0325 02:03:47.856331  479985 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 02:03:47.865062  479985 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (582 bytes)
	I0325 02:03:47.879018  479985 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0325 02:03:47.893704  479985 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2066 bytes)
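With the binaries, unit files, and /var/tmp/minikube/kubeadm.yaml.new now in place, the bootstrap that follows is driven by kubeadm against that config. The hand-run equivalent would be roughly (a sketch; minikube invokes this through its own wrapper and adds flags not shown here):
    sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new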
	I0325 02:03:47.909026  479985 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0325 02:03:47.912575  479985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:03:47.926187  479985 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786 for IP: 192.168.67.2
	I0325 02:03:47.926318  479985 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 02:03:47.926358  479985 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 02:03:47.926420  479985 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/client.key
	I0325 02:03:47.926439  479985 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/client.crt with IP's: []
	I0325 02:03:48.219937  479985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/client.crt ...
	I0325 02:03:48.219975  479985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/client.crt: {Name:mk9159febf84dfeca17cb3f06989262e0668adc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:03:48.220227  479985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/client.key ...
	I0325 02:03:48.220241  479985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/client.key: {Name:mk49ce42a2cef38835ce97fac2949cf2851de14a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:03:48.220329  479985 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.key.c7fa3a9e
	I0325 02:03:48.220346  479985 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0325 02:03:48.399288  479985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.crt.c7fa3a9e ...
	I0325 02:03:48.399332  479985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.crt.c7fa3a9e: {Name:mk92b21fc74a242c3777059bfa70a5ea6bca2be0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:03:48.399531  479985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.key.c7fa3a9e ...
	I0325 02:03:48.399545  479985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.key.c7fa3a9e: {Name:mkb556124b434a1753fb2873526fa243f44ca066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:03:48.399631  479985 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.crt
	I0325 02:03:48.399687  479985 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.key
	I0325 02:03:48.399729  479985 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.key
	I0325 02:03:48.399742  479985 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.crt with IP's: []
	I0325 02:03:48.624675  479985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.crt ...
	I0325 02:03:48.624711  479985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.crt: {Name:mk8dd46a9c38d1223f8ef793f0437ac8b66af261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:03:48.624932  479985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.key ...
	I0325 02:03:48.624946  479985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.key: {Name:mk4f74a72f2b39c7fe69855fd801b8885bb1315d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:03:48.625117  479985 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 02:03:48.625157  479985 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 02:03:48.625170  479985 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 02:03:48.625199  479985 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 02:03:48.625227  479985 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 02:03:48.625251  479985 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 02:03:48.625287  479985 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:03:48.625919  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 02:03:48.645111  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0325 02:03:48.662889  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 02:03:48.680862  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0325 02:03:48.698947  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 02:03:48.719458  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 02:03:48.737623  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 02:03:48.757066  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 02:03:48.775477  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 02:03:48.794448  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 02:03:48.813767  479985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 02:03:48.832990  479985 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 02:03:48.846996  479985 ssh_runner.go:195] Run: openssl version
	I0325 02:03:48.852564  479985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 02:03:48.860734  479985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:03:48.864172  479985 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:03:48.864223  479985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:03:48.869532  479985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0325 02:03:48.877229  479985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 02:03:48.885455  479985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 02:03:48.889225  479985 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 02:03:48.889318  479985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 02:03:48.894557  479985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
	I0325 02:03:48.902823  479985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 02:03:48.912007  479985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 02:03:48.915451  479985 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 02:03:48.915508  479985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 02:03:48.920770  479985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
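The openssl/ln -fs pairs above follow the OpenSSL trust-store convention: a CA is looked up by its subject-hash filename, so each PEM under /usr/share/ca-certificates gets a <hash>.0 symlink in /etc/ssl/certs (e.g. minikubeCA.pem -> b5213941.0). A minimal Go sketch of that same hash-and-link step (a hypothetical helper for illustration, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCert installs a CA the way the log does: compute the OpenSSL
    // subject hash, then point /etc/ssl/certs/<hash>.0 at the PEM.
    func linkCert(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // emulate ln -fs: replace any stale link
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }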
	I0325 02:03:48.928484  479985 kubeadm.go:391] StartCluster: {Name:no-preload-20220325020326-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:03:48.928592  479985 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 02:03:48.928642  479985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:03:48.953299  479985 cri.go:87] found id: ""
	I0325 02:03:48.953367  479985 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 02:03:48.960775  479985 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:03:48.968011  479985 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 02:03:48.968069  479985 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:03:48.975294  479985 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0325 02:03:48.975352  479985 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0325 02:04:04.108215  479985 out.go:203]   - Generating certificates and keys ...
	I0325 02:04:04.111551  479985 out.go:203]   - Booting up control plane ...
	I0325 02:04:04.114259  479985 out.go:203]   - Configuring RBAC rules ...
	I0325 02:04:04.116387  479985 cni.go:93] Creating CNI manager for ""
	I0325 02:04:04.116405  479985 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:04:04.118277  479985 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:04:04.118336  479985 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:04:04.122758  479985 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl ...
	I0325 02:04:04.122787  479985 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:04:04.137318  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:04:04.990342  479985 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 02:04:04.990416  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:04.990461  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=no-preload-20220325020326-262786 minikube.k8s.io/updated_at=2022_03_25T02_04_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:05.091240  479985 ops.go:34] apiserver oom_adj: -16
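The cat /proc/$(pgrep kube-apiserver)/oom_adj probe above reads the kernel OOM-killer bias for the API server; -16 tells the kernel to strongly prefer other victims under memory pressure. The same check in Go (a sketch, assuming a single kube-apiserver process on the node):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// pgrep prints one PID per line; take the first match.
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "kube-apiserver not found:", err)
    		return
    	}
    	pid := strings.Fields(string(out))[0]
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj) // the log above shows -16
    }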
	I0325 02:04:05.091357  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:05.645058  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:06.145903  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:06.645723  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:07.145843  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:07.645550  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:08.145911  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:08.645326  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:09.144971  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:09.645547  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:10.144971  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:10.645829  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:11.145143  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:11.645281  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:12.145823  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:12.645028  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:13.145942  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:13.645654  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:14.145794  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:14.645252  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:15.145882  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:15.645938  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:16.145782  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:16.645079  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:17.145519  479985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:04:17.204716  479985 kubeadm.go:1020] duration metric: took 12.2143439s to wait for elevateKubeSystemPrivileges.
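The burst of identical "get sa default" runs above (roughly every 500ms from 02:04:04 to 02:04:17) is minikube waiting for kubeadm to create the default ServiceAccount before applying the kube-system RBAC binding. A rough standalone equivalent of that wait loop (a sketch, not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds,
    // mirroring the ~500ms cadence visible in the log above.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
    		if cmd.Run() == nil {
    			return nil // ServiceAccount exists; RBAC setup can proceed
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA(os.Getenv("KUBECONFIG"), time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }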
	I0325 02:04:17.204751  479985 kubeadm.go:393] StartCluster complete in 28.27627808s
	I0325 02:04:17.204773  479985 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:04:17.204888  479985 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:04:17.206385  479985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:04:17.726382  479985 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220325020326-262786" rescaled to 1
	I0325 02:04:17.726451  479985 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:04:17.726490  479985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 02:04:17.726496  479985 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0325 02:04:17.729598  479985 out.go:176] * Verifying Kubernetes components...
	I0325 02:04:17.726571  479985 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220325020326-262786"
	I0325 02:04:17.729652  479985 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220325020326-262786"
	W0325 02:04:17.729665  479985 addons.go:165] addon storage-provisioner should already be in state true
	I0325 02:04:17.729676  479985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:04:17.729700  479985 host.go:66] Checking if "no-preload-20220325020326-262786" exists ...
	I0325 02:04:17.726580  479985 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220325020326-262786"
	I0325 02:04:17.729738  479985 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220325020326-262786"
	I0325 02:04:17.727256  479985 config.go:176] Loaded profile config "no-preload-20220325020326-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:04:17.730112  479985 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:04:17.730279  479985 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:04:17.773895  479985 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:04:17.774045  479985 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:04:17.774065  479985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 02:04:17.774127  479985 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:04:17.785636  479985 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220325020326-262786"
	W0325 02:04:17.785665  479985 addons.go:165] addon default-storageclass should already be in state true
	I0325 02:04:17.785691  479985 host.go:66] Checking if "no-preload-20220325020326-262786" exists ...
	I0325 02:04:17.786271  479985 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:04:17.819846  479985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49554 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:04:17.822257  479985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0325 02:04:17.823776  479985 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220325020326-262786" to be "Ready" ...
	I0325 02:04:17.837338  479985 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 02:04:17.837367  479985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 02:04:17.837425  479985 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:04:17.872898  479985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49554 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:04:18.013611  479985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:04:18.103111  479985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 02:04:18.314639  479985 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
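The replace command at 02:04:17 rewrites the coredns ConfigMap in place; reconstructed from the sed expression itself, the stanza injected ahead of the forward plugin is:

    hosts {
       192.168.67.1 host.minikube.internal
       fallthrough
    }

This is what lets pods resolve host.minikube.internal to the Docker network gateway (192.168.67.1 for this cluster).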
	I0325 02:04:18.545444  479985 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0325 02:04:18.545472  479985 addons.go:417] enableAddons completed in 818.992719ms
	I0325 02:04:19.831409  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:04:22.331207  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:04:24.331676  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:04:26.831272  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:04:29.330908  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:04:31.331249  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:04:33.830884  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:04:35.831067  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:04:37.831241  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:04:40.331229  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:04:42.831000  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:04:45.331240  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:04:47.831124  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:04:50.331348  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:04:52.331860  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:04:54.831168  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:04:57.330847  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:04:59.331361  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:01.831214  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:04.331301  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:06.830561  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:09.330785  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:11.331180  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:13.831195  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:16.331268  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:18.831211  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:21.331296  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:23.331601  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:25.831278  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:28.330147  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:30.331373  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:32.830758  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:34.831079  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:36.831349  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:39.331464  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:41.831170  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:44.331184  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:46.831495  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:49.331206  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:51.831274  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:54.331309  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:56.331793  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:05:58.830942  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:00.831104  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:02.831433  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:05.331503  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:07.831032  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:10.331290  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:12.331357  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:14.830327  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:16.830802  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:18.831084  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:21.331277  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:23.331392  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:25.831388  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:28.330817  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:30.331354  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:32.331432  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:34.831181  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:37.331014  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:39.830970  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:41.831121  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:44.331398  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:46.331646  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:48.831301  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:51.331464  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:53.830986  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:55.831304  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:06:58.331087  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:00.830805  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:02.831097  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:05.331488  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:07.830778  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:10.331071  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:12.831233  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:15.330708  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:17.330916  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:19.331274  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:21.831273  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:24.331399  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:26.830770  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:29.331180  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:31.831005  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:34.331520  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:36.830666  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:38.830940  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:40.831005  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:43.333462  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:45.831003  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:47.831362  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:50.331190  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:52.831187  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:55.331482  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:57.830847  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:59.831204  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:02.331558  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:04.831111  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:06.831162  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:09.330917  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:11.331447  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:13.331743  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:15.830611  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:17.831386  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:17.834384  479985 node_ready.go:38] duration metric: took 4m0.010565517s waiting for node "no-preload-20220325020326-262786" to be "Ready" ...
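Each node_ready line above is one poll of the node's Ready condition, which never flips to True before the wait gives up. The condition being polled can be inspected directly with kubectl's JSONPath support; a sketch (node name and kubeconfig path taken from this log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// JSONPath extracts just the Ready condition; "True" means the kubelet
    	// reports healthy, anything else matches the "Ready":"False" lines above.
    	out, err := exec.Command("kubectl",
    		"--kubeconfig", "/var/lib/minikube/kubeconfig",
    		"get", "node", "no-preload-20220325020326-262786",
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).CombinedOutput()
    	if err != nil {
    		fmt.Println(err)
    	}
    	fmt.Println(string(out))
    }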
	I0325 02:08:17.837380  479985 out.go:176] 
	W0325 02:08:17.837581  479985 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:08:17.837614  479985 out.go:241] * 
	W0325 02:08:17.838700  479985 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0325 02:08:17.841114  479985 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p no-preload-20220325020326-262786 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.4-rc.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20220325020326-262786
helpers_test.go:236: (dbg) docker inspect no-preload-20220325020326-262786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778",
	        "Created": "2022-03-25T02:03:28.535684956Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 480559,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-25T02:03:28.905976635Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/hostname",
	        "HostsPath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/hosts",
	        "LogPath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778-json.log",
	        "Name": "/no-preload-20220325020326-262786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-20220325020326-262786:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220325020326-262786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6-init/diff:/var/lib/docker/overlay2/be54eb94fcdf63a43430c63a2deca34069b6322a2c5c80bf32c41c253b4eef44/diff:/var/lib/docker/overlay2/21ae1babc9289fff260c3571871aeb833b37e21656a9cc3eb8df07eb3fe4f79a/diff:/var/lib/docker/overlay2/3ee3331c2ddb88569b411d0ab54a2ef7d7d01ca16c80ced0833580bfbe9bdada/diff:/var/lib/docker/overlay2/d8bc8d60c9bd47ff1095b644ef6d44396d637a148bcebd5ea3b7706fee6b13fe/diff:/var/lib/docker/overlay2/ca1519d93c9c70a99f709b179bab33e31837f4b561c407c362770656a0ad970a/diff:/var/lib/docker/overlay2/8b7ed626d5c01c442f80e5e9bbe87bdaa4e3b209e4d0720010e78ab32631a44a/diff:/var/lib/docker/overlay2/fb54723378f675b6bc72cd8608807007fdf0fc435e1383398764588c2881dcc7/diff:/var/lib/docker/overlay2/20deb1df880f3adcdce0caa0e0b6ce0170bb01f7b7c564aa7c00c10e886a8422/diff:/var/lib/docker/overlay2/3e9c58516a6ca7eb07cbd77ece10826bcffc2c564c20a046413c894f1e457c14/diff:/var/lib/docker/overlay2/9fb4a5
72727350e63058db77497edb3aa8f3fd157bf3faa4b882f3d6218a2d2c/diff:/var/lib/docker/overlay2/2dad70b776042365cd2686f6925d1728b98e82e82f5ec21fcafaa6ce796653ed/diff:/var/lib/docker/overlay2/d94272e0e249e656b05e1483e035d137254d3bab6b9c568065d1f8783a72cf04/diff:/var/lib/docker/overlay2/c92254120acded698585ef577c9ac3d6f73267981cf36a87ee38ccd694f47b94/diff:/var/lib/docker/overlay2/84b4bbb670c367ba779baabe503b5345e2c3e2beb5a4505c3b235e5db4e89ee6/diff:/var/lib/docker/overlay2/4981a02b24aef7d5c066a42837381dcdd4a299b491d8e55523fca674cd0db0d1/diff:/var/lib/docker/overlay2/c3c34e9c466bb3a144a51042f0930825943916afe285a7f97644c400518f341f/diff:/var/lib/docker/overlay2/44f4921d100d6ba90db390588513726503aec84844325bd99eeb137c6018277f/diff:/var/lib/docker/overlay2/a39a458488b7f863079e4c6b58196e8a4f9082987519a734c45a007cd0d94828/diff:/var/lib/docker/overlay2/f0312047c7b0b02fd66fd826e23406e40cee0ca3ceecfd3ead5dcecbc5026230/diff:/var/lib/docker/overlay2/96c9397a20500e41888794ddb5877995a1734042648a24d59ca0d2ec5021e9de/diff:/var/lib/d
ocker/overlay2/8dea2fad08fc7127380e7b5ee48074c49d9bb8abb4e0e626d1753b47e734e16a/diff:/var/lib/docker/overlay2/b45ce3d74626e250be956220b3bdd19784c7b5f160566cf2abc4e3bebec2e787/diff:/var/lib/docker/overlay2/c53d5b53646725c2e75d104fbdf63f67b1a6d4ec7be410f678c39db7ca88704f/diff:/var/lib/docker/overlay2/c989625713fed7c79c6acf122f86cb4a5d36c5c25f16b6ff042aba0f5c76ef40/diff:/var/lib/docker/overlay2/062c90de70f705242f19d7fd008480be165d852e536336d97bcfe7aaba03bc2c/diff:/var/lib/docker/overlay2/9dd5e3e1997449a8dd0820e30ab1aa5b34db265e9783f9431ebcca7ceaf17510/diff:/var/lib/docker/overlay2/4cb50a0a67380109d348cd3005e5b855fceaf243cf5b0130df8952ed58e6c56c/diff:/var/lib/docker/overlay2/1fce572a3789e30bd91fd684a3bc2cae58743b3718b1d078378158f22156795e/diff:/var/lib/docker/overlay2/2bb28738c8f2de75a3da83169e8b29e28c57bf73908fe80dbca06551ac39d459/diff:/var/lib/docker/overlay2/70c1f9c120af3acda7bbd97c063aeed205a47f16b2818b7a2c4e5cfa2e3321bf/diff:/var/lib/docker/overlay2/84cfc718f71abd3da77845f467dceaeceb62953d1f92e9cb2d966b19d2e
9a733/diff:/var/lib/docker/overlay2/8d6f862f75e903072cefca0f974c925dc5946ac5bf7bcb923adecf23cdb3d454/diff:/var/lib/docker/overlay2/778af97f4ec3a1e9ceed247958939b375c3209058ee649ac0231b3ccf59c0e5d/diff:/var/lib/docker/overlay2/c0e0a5b57f41ef9ddf67d67f928bcbbd060abb8aa3ec732c9ee48b3d5ce723a2/diff:/var/lib/docker/overlay2/f4bc2ed173f4985e492d89df0a08aa6017952a9ac37054537d57bb7589c1560e/diff:/var/lib/docker/overlay2/562d496753ef0c1e8279787dfdb7cb4d6e8cfbd0eaf79a1f9dc3fd10916330b5/diff:/var/lib/docker/overlay2/717fb77b4f16514e3bd496845adfe213bd63609053b43f6d800e6757197f0f04/diff:/var/lib/docker/overlay2/4e8d84337665652419a5a40f908d308087c202f55b785114c1e38be84a17eca7/diff:/var/lib/docker/overlay2/5b34f3b4b29c9f9ab991b524096584bbf01d14e9d8d4b7786bda6d28241999e8/diff:/var/lib/docker/overlay2/49e6c28c6a50420d2f8e2a3d3278d425495086d9478a7ece39dd989925949a5d/diff:/var/lib/docker/overlay2/86c1534e0117ca4e106fa3c177c4f1b2d85e37b9d2a5dceeb007afff1721713e/diff:/var/lib/docker/overlay2/c5013a5641f131cadca99884c2ae5b785bfae4
a079463490ea0cd215cd884d43/diff:/var/lib/docker/overlay2/f61ccdb261987275521111370c06a14baf102e5047e24281f278eaaee820a410/diff:/var/lib/docker/overlay2/46838e2b0c3f67b4bfda29963d76e2c8babbd54904a4a6f5745e924a73437c2d/diff:/var/lib/docker/overlay2/16180439a4d3ee12ff794b26cbfd692186d7785b4c6f33c8c57416535667c54e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220325020326-262786",
	                "Source": "/var/lib/docker/volumes/no-preload-20220325020326-262786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220325020326-262786",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220325020326-262786",
	                "name.minikube.sigs.k8s.io": "no-preload-20220325020326-262786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7f23607c3a5c08b0783c59d71ece486f3f43c024250c62ab15201265e18ba268",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49554"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49553"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49550"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49552"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49551"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7f23607c3a5c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220325020326-262786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6f52c20ff4ed",
	                        "no-preload-20220325020326-262786"
	                    ],
	                    "NetworkID": "6fbac9304f70e9e85060797caa05d374912c7ea43808a752012c2c1abc994540",
	                    "EndpointID": "e5245d3bbce08b43baa512fa9f1a16faf8d4935ea476d70841cfec48e04346df",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220325020326-262786 -n no-preload-20220325020326-262786
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220325020326-262786 logs -n 25
helpers_test.go:253: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p pause-20220325015121-262786           | pause-20220325015121-262786              | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:06 UTC | Fri, 25 Mar 2022 01:53:06 UTC |
	| delete  | -p                                       | kubernetes-upgrade-20220325015003-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:05 UTC | Fri, 25 Mar 2022 01:53:09 UTC |
	|         | kubernetes-upgrade-20220325015003-262786 |                                          |         |         |                               |                               |
	| start   | -p auto-20220325014919-262786            | auto-20220325014919-262786               | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:54 UTC | Fri, 25 Mar 2022 01:53:54 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m            |                                          |         |         |                               |                               |
	|         | --driver=docker                          |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| ssh     | -p auto-20220325014919-262786            | auto-20220325014919-262786               | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:54 UTC | Fri, 25 Mar 2022 01:53:54 UTC |
	|         | pgrep -a kubelet                         |                                          |         |         |                               |                               |
	| delete  | -p auto-20220325014919-262786            | auto-20220325014919-262786               | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:05 UTC | Fri, 25 Mar 2022 01:54:08 UTC |
	| start   | -p                                       | running-upgrade-20220325014921-262786    | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:37 UTC | Fri, 25 Mar 2022 01:54:11 UTC |
	|         | running-upgrade-20220325014921-262786    |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | running-upgrade-20220325014921-262786    | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:11 UTC | Fri, 25 Mar 2022 01:54:22 UTC |
	|         | running-upgrade-20220325014921-262786    |                                          |         |         |                               |                               |
	| start   | -p                                       | cilium-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:09 UTC | Fri, 25 Mar 2022 01:54:40 UTC |
	|         | cilium-20220325014921-262786             |                                          |         |         |                               |                               |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m            |                                          |         |         |                               |                               |
	|         | --cni=cilium --driver=docker             |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| ssh     | -p                                       | cilium-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:45 UTC | Fri, 25 Mar 2022 01:54:45 UTC |
	|         | cilium-20220325014921-262786             |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                         |                                          |         |         |                               |                               |
	| delete  | -p                                       | cilium-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:57 UTC | Fri, 25 Mar 2022 01:55:00 UTC |
	|         | cilium-20220325014921-262786             |                                          |         |         |                               |                               |
	| start   | -p                                       | kindnet-20220325014920-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:55:00 UTC | Fri, 25 Mar 2022 01:56:12 UTC |
	|         | kindnet-20220325014920-262786            |                                          |         |         |                               |                               |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m            |                                          |         |         |                               |                               |
	|         | --cni=kindnet --driver=docker            |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| ssh     | -p                                       | kindnet-20220325014920-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:56:17 UTC | Fri, 25 Mar 2022 01:56:17 UTC |
	|         | kindnet-20220325014920-262786            |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                         |                                          |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786    | old-k8s-version-20220325015306-262786    | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:01:26 UTC | Fri, 25 Mar 2022 02:01:27 UTC |
	|         | logs -n 25                               |                                          |         |         |                               |                               |
	| -p      | kindnet-20220325014920-262786            | kindnet-20220325014920-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:01:33 UTC | Fri, 25 Mar 2022 02:01:34 UTC |
	|         | logs -n 25                               |                                          |         |         |                               |                               |
	| delete  | -p                                       | kindnet-20220325014920-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:01:34 UTC | Fri, 25 Mar 2022 02:01:37 UTC |
	|         | kindnet-20220325014920-262786            |                                          |         |         |                               |                               |
	| start   | -p                                       | enable-default-cni-20220325014920-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:01:37 UTC | Fri, 25 Mar 2022 02:02:36 UTC |
	|         | enable-default-cni-20220325014920-262786 |                                          |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m            |                                          |         |         |                               |                               |
	|         | --enable-default-cni=true                |                                          |         |         |                               |                               |
	|         | --driver=docker                          |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| ssh     | -p                                       | enable-default-cni-20220325014920-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:02:37 UTC | Fri, 25 Mar 2022 02:02:37 UTC |
	|         | enable-default-cni-20220325014920-262786 |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                         |                                          |         |         |                               |                               |
	| -p      | calico-20220325014921-262786             | calico-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:02:55 UTC | Fri, 25 Mar 2022 02:02:55 UTC |
	|         | logs -n 25                               |                                          |         |         |                               |                               |
	| delete  | -p                                       | calico-20220325014921-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:02:56 UTC | Fri, 25 Mar 2022 02:02:59 UTC |
	|         | calico-20220325014921-262786             |                                          |         |         |                               |                               |
	| -p      | custom-weave-20220325014921-262786       | custom-weave-20220325014921-262786       | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:03:22 UTC | Fri, 25 Mar 2022 02:03:23 UTC |
	|         | logs -n 25                               |                                          |         |         |                               |                               |
	| delete  | -p                                       | custom-weave-20220325014921-262786       | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:03:24 UTC | Fri, 25 Mar 2022 02:03:26 UTC |
	|         | custom-weave-20220325014921-262786       |                                          |         |         |                               |                               |
	| start   | -p                                       | bridge-20220325014920-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:02:59 UTC | Fri, 25 Mar 2022 02:03:56 UTC |
	|         | bridge-20220325014920-262786             |                                          |         |         |                               |                               |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m            |                                          |         |         |                               |                               |
	|         | --cni=bridge --driver=docker             |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| ssh     | -p                                       | bridge-20220325014920-262786             | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:03:57 UTC | Fri, 25 Mar 2022 02:03:57 UTC |
	|         | bridge-20220325014920-262786             |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                         |                                          |         |         |                               |                               |
	| -p      | enable-default-cni-20220325014920-262786 | enable-default-cni-20220325014920-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:07:38 UTC | Fri, 25 Mar 2022 02:07:39 UTC |
	|         | logs -n 25                               |                                          |         |         |                               |                               |
	| delete  | -p                                       | enable-default-cni-20220325014920-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:07:40 UTC | Fri, 25 Mar 2022 02:07:43 UTC |
	|         | enable-default-cni-20220325014920-262786 |                                          |         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
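
Each row above records one invocation of the out/minikube-linux-amd64 binary by the test harness. A minimal Go sketch of scripting such an invocation (binary path, profile name, and flag values are illustrative, copied from the bridge row):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Start a profile with the same flag set the CNI rows above use.
	// Binary path and profile name are placeholders.
	cmd := exec.Command("out/minikube-linux-amd64",
		"start", "-p", "example-profile",
		"--memory=2048", "--alsologtostderr",
		"--wait=true", "--wait-timeout=5m",
		"--cni=bridge", "--driver=docker",
		"--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	fmt.Printf("exit err=%v\n%s", err, out)
}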
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 02:07:43
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0325 02:07:43.125922  488107 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:07:43.126069  488107 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:07:43.126080  488107 out.go:310] Setting ErrFile to fd 2...
	I0325 02:07:43.126084  488107 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:07:43.126194  488107 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:07:43.126475  488107 out.go:304] Setting JSON to false
	I0325 02:07:43.128618  488107 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17135,"bootTime":1648156928,"procs":330,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:07:43.128710  488107 start.go:125] virtualization: kvm guest
	I0325 02:07:43.131512  488107 out.go:176] * [embed-certs-20220325020743-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:07:43.133278  488107 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:07:43.131706  488107 notify.go:193] Checking for updates...
	I0325 02:07:43.135179  488107 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:07:43.137062  488107 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:07:43.138674  488107 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:07:43.140508  488107 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:07:43.141026  488107 config.go:176] Loaded profile config "bridge-20220325014920-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:07:43.141149  488107 config.go:176] Loaded profile config "no-preload-20220325020326-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:07:43.141228  488107 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0325 02:07:43.141301  488107 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:07:43.186432  488107 docker.go:136] docker version: linux-20.10.14
	I0325 02:07:43.186553  488107 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:07:43.287237  488107 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:63 SystemTime:2022-03-25 02:07:43.219781999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:07:43.287338  488107 docker.go:253] overlay module found
	I0325 02:07:43.290838  488107 out.go:176] * Using the docker driver based on user configuration
	I0325 02:07:43.290875  488107 start.go:284] selected driver: docker
	I0325 02:07:43.290883  488107 start.go:801] validating driver "docker" against <nil>
	I0325 02:07:43.290906  488107 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:07:43.290969  488107 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:07:43.290994  488107 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:07:43.292609  488107 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:07:43.293305  488107 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:07:43.394540  488107 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:63 SystemTime:2022-03-25 02:07:43.326441301 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:07:43.394720  488107 start_flags.go:290] no existing cluster config was found, will generate one from the flags 
	I0325 02:07:43.394928  488107 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 02:07:43.394996  488107 cni.go:93] Creating CNI manager for ""
	I0325 02:07:43.395015  488107 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:07:43.395026  488107 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0325 02:07:43.395036  488107 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0325 02:07:43.395046  488107 start_flags.go:299] Found "CNI" CNI - setting NetworkPlugin=cni
	I0325 02:07:43.395059  488107 start_flags.go:304] config:
	{Name:embed-certs-20220325020743-262786 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:embed-certs-20220325020743-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:07:43.398806  488107 out.go:176] * Starting control plane node embed-certs-20220325020743-262786 in cluster embed-certs-20220325020743-262786
	I0325 02:07:43.398861  488107 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:07:43.400415  488107 out.go:176] * Pulling base image ...
	I0325 02:07:43.400451  488107 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:07:43.400488  488107 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
	I0325 02:07:43.400510  488107 cache.go:57] Caching tarball of preloaded images
	I0325 02:07:43.400547  488107 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:07:43.400736  488107 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 02:07:43.400752  488107 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on containerd
	I0325 02:07:43.400869  488107 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/config.json ...
	I0325 02:07:43.400899  488107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/config.json: {Name:mkfaa8650515210034c8a84c8336d9c2097ce3da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:07:43.440120  488107 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:07:43.440151  488107 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:07:43.440163  488107 cache.go:208] Successfully downloaded all kic artifacts
	I0325 02:07:43.440211  488107 start.go:348] acquiring machines lock for embed-certs-20220325020743-262786: {Name:mk09b5bda74ca4ab49b97f5fa7fb6add6f27caec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:07:43.440365  488107 start.go:352] acquired machines lock for "embed-certs-20220325020743-262786" in 129.974µs
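
The machines lock above (note the Delay:500ms / Timeout:10m0s spec) serializes concurrent profile creation on the same host. A rough file-lock equivalent, purely illustrative and not minikube's actual lock implementation:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file until the timeout elapses,
// mirroring the Delay/Timeout fields logged above. Path is illustrative.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil // release callback
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held")
}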
	I0325 02:07:43.440411  488107 start.go:90] Provisioning new machine with config: &{Name:embed-certs-20220325020743-262786 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:embed-certs-20220325020743-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:07:43.440520  488107 start.go:127] createHost starting for "" (driver="docker")
	I0325 02:07:43.333462  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:45.831003  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:43.443474  488107 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0325 02:07:43.443817  488107 start.go:161] libmachine.API.Create for "embed-certs-20220325020743-262786" (driver="docker")
	I0325 02:07:43.443859  488107 client.go:168] LocalClient.Create starting
	I0325 02:07:43.443932  488107 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0325 02:07:43.443999  488107 main.go:130] libmachine: Decoding PEM data...
	I0325 02:07:43.444020  488107 main.go:130] libmachine: Parsing certificate...
	I0325 02:07:43.444096  488107 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0325 02:07:43.444123  488107 main.go:130] libmachine: Decoding PEM data...
	I0325 02:07:43.444146  488107 main.go:130] libmachine: Parsing certificate...
	I0325 02:07:43.444558  488107 cli_runner.go:133] Run: docker network inspect embed-certs-20220325020743-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0325 02:07:43.480689  488107 cli_runner.go:180] docker network inspect embed-certs-20220325020743-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0325 02:07:43.480790  488107 network_create.go:254] running [docker network inspect embed-certs-20220325020743-262786] to gather additional debugging logs...
	I0325 02:07:43.480816  488107 cli_runner.go:133] Run: docker network inspect embed-certs-20220325020743-262786
	W0325 02:07:43.515542  488107 cli_runner.go:180] docker network inspect embed-certs-20220325020743-262786 returned with exit code 1
	I0325 02:07:43.515583  488107 network_create.go:257] error running [docker network inspect embed-certs-20220325020743-262786]: docker network inspect embed-certs-20220325020743-262786: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220325020743-262786
	I0325 02:07:43.515599  488107 network_create.go:259] output of [docker network inspect embed-certs-20220325020743-262786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220325020743-262786
	
	** /stderr **
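
The exit-1 inspect above is the expected probe: minikube asks Docker for the network by name and treats "No such network" as a signal to create it, which the next lines do. A minimal sketch of that probe-then-create pattern (the network name is a placeholder):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "example-net" // illustrative
	// Probe: inspect exits non-zero when the network does not exist.
	out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
	if err != nil && strings.Contains(string(out), "No such network") {
		// Absent: create it, as the log does next.
		if err := exec.Command("docker", "network", "create", name).Run(); err != nil {
			fmt.Println("create failed:", err)
			return
		}
		fmt.Println("created", name)
		return
	}
	fmt.Println("already exists (or probe error):", err)
}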
	I0325 02:07:43.515663  488107 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:07:43.551187  488107 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-ae7d63f7c465 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a9:90:49:95}}
	I0325 02:07:43.552130  488107 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc00061a1e0] misses:0}
	I0325 02:07:43.552182  488107 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0325 02:07:43.552233  488107 network_create.go:106] attempt to create docker network embed-certs-20220325020743-262786 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0325 02:07:43.552292  488107 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220325020743-262786
	I0325 02:07:43.627777  488107 network_create.go:90] docker network embed-certs-20220325020743-262786 192.168.58.0/24 created
	I0325 02:07:43.627819  488107 kic.go:106] calculated static IP "192.168.58.2" for the "embed-certs-20220325020743-262786" container
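
Subnet selection proceeds as logged: the default 192.168.49.0/24 is already held by another profile's bridge, so the next free /24 is reserved, with .1 as the gateway and .2 as the node's static IP. A small sketch of deriving those two addresses from a CIDR:

package main

import (
	"fmt"
	"net"
)

// gatewayAndNodeIP derives the .1 gateway and .2 node address assigned
// inside a fresh /24, as in the 192.168.58.0/24 lines above.
func gatewayAndNodeIP(cidr string) (net.IP, net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, nil, err
	}
	gw := make(net.IP, len(ipnet.IP))
	copy(gw, ipnet.IP)
	gw[len(gw)-1] = 1
	node := make(net.IP, len(ipnet.IP))
	copy(node, ipnet.IP)
	node[len(node)-1] = 2
	return gw, node, nil
}

func main() {
	gw, node, _ := gatewayAndNodeIP("192.168.58.0/24")
	fmt.Println(gw, node) // 192.168.58.1 192.168.58.2
}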
	I0325 02:07:43.627877  488107 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0325 02:07:43.664593  488107 cli_runner.go:133] Run: docker volume create embed-certs-20220325020743-262786 --label name.minikube.sigs.k8s.io=embed-certs-20220325020743-262786 --label created_by.minikube.sigs.k8s.io=true
	I0325 02:07:43.702070  488107 oci.go:102] Successfully created a docker volume embed-certs-20220325020743-262786
	I0325 02:07:43.702177  488107 cli_runner.go:133] Run: docker run --rm --name embed-certs-20220325020743-262786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220325020743-262786 --entrypoint /usr/bin/test -v embed-certs-20220325020743-262786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0325 02:07:44.281376  488107 oci.go:106] Successfully prepared a docker volume embed-certs-20220325020743-262786
	I0325 02:07:44.281444  488107 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:07:44.281466  488107 kic.go:179] Starting extracting preloaded images to volume ...
	I0325 02:07:44.281550  488107 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20220325020743-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
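
The preload step above avoids a slow docker cp: the lz4 tarball is bind-mounted read-only next to the named volume, and tar runs as the entrypoint of a throwaway kicbase container to unpack straight into the volume. A sketch assembling that same command (tarball path, volume name, and image tag are placeholders):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Placeholders for the real tarball, volume, and kic image above.
	tarball := "/path/to/preloaded-images.tar.lz4"
	volume := "example-volume"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.30"

	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}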
	I0325 02:07:47.831362  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:50.331190  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:52.831187  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:55.331482  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:53.723676  488107 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20220325020743-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (9.442076204s)
	I0325 02:07:53.723721  488107 kic.go:188] duration metric: took 9.442250 seconds to extract preloaded images to volume
	W0325 02:07:53.723770  488107 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0325 02:07:53.723780  488107 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0325 02:07:53.723846  488107 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0325 02:07:53.824660  488107 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20220325020743-262786 --name embed-certs-20220325020743-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220325020743-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20220325020743-262786 --network embed-certs-20220325020743-262786 --ip 192.168.58.2 --volume embed-certs-20220325020743-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0325 02:07:54.256685  488107 cli_runner.go:133] Run: docker container inspect embed-certs-20220325020743-262786 --format={{.State.Running}}
	I0325 02:07:54.293494  488107 cli_runner.go:133] Run: docker container inspect embed-certs-20220325020743-262786 --format={{.State.Status}}
	I0325 02:07:54.332069  488107 cli_runner.go:133] Run: docker exec embed-certs-20220325020743-262786 stat /var/lib/dpkg/alternatives/iptables
	I0325 02:07:54.401929  488107 oci.go:281] the created container "embed-certs-20220325020743-262786" has a running status.
	I0325 02:07:54.401979  488107 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa...
	I0325 02:07:54.559890  488107 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0325 02:07:54.651986  488107 cli_runner.go:133] Run: docker container inspect embed-certs-20220325020743-262786 --format={{.State.Status}}
	I0325 02:07:54.691080  488107 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0325 02:07:54.691114  488107 kic_runner.go:114] Args: [docker exec --privileged embed-certs-20220325020743-262786 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0325 02:07:54.796612  488107 cli_runner.go:133] Run: docker container inspect embed-certs-20220325020743-262786 --format={{.State.Status}}
	I0325 02:07:54.834935  488107 machine.go:88] provisioning docker machine ...
	I0325 02:07:54.835063  488107 ubuntu.go:169] provisioning hostname "embed-certs-20220325020743-262786"
	I0325 02:07:54.835126  488107 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
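
Because every service port was published to an ephemeral loopback port (--publish=127.0.0.1::22 and friends in the docker run above), the SSH port has to be discovered afterwards, which is what the inspect template on the previous line does. A small sketch of the same lookup:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "example-container" // illustrative
	// Same Go template the log uses to find where 22/tcp landed on the host.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, name).Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}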
	I0325 02:07:54.873264  488107 main.go:130] libmachine: Using SSH client type: native
	I0325 02:07:54.873551  488107 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49559 <nil> <nil>}
	I0325 02:07:54.873584  488107 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220325020743-262786 && echo "embed-certs-20220325020743-262786" | sudo tee /etc/hostname
	I0325 02:07:55.010473  488107 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20220325020743-262786
	
	I0325 02:07:55.010573  488107 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:07:55.047155  488107 main.go:130] libmachine: Using SSH client type: native
	I0325 02:07:55.047335  488107 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49559 <nil> <nil>}
	I0325 02:07:55.047368  488107 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220325020743-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220325020743-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220325020743-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 02:07:55.167041  488107 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 02:07:55.167077  488107 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:07:55.167100  488107 ubuntu.go:177] setting up certificates
	I0325 02:07:55.167110  488107 provision.go:83] configureAuth start
	I0325 02:07:55.167162  488107 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220325020743-262786
	I0325 02:07:55.202740  488107 provision.go:138] copyHostCerts
	I0325 02:07:55.202811  488107 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:07:55.202828  488107 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:07:55.202902  488107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:07:55.203069  488107 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:07:55.203085  488107 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:07:55.203123  488107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:07:55.203206  488107 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:07:55.203219  488107 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:07:55.203251  488107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:07:55.203324  488107 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220325020743-262786 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220325020743-262786]
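
The server cert above is signed by the profile's CA and carries the node IP, loopback, and hostnames as SANs, so the endpoint validates from both inside and outside the container. A condensed Go sketch of issuing such a SAN-bearing cert (throwaway CA, short lifetime, error handling elided; not minikube's provision code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for ca.pem/ca-key.pem above.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "example-node"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// SAN set modeled on the san=[...] list logged above.
		IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "example-node"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println("server cert bytes:", len(srvDER))
}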
	I0325 02:07:55.363518  488107 provision.go:172] copyRemoteCerts
	I0325 02:07:55.363584  488107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:07:55.363631  488107 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:07:55.398439  488107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49559 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:07:55.486817  488107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0325 02:07:55.506554  488107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0325 02:07:55.524426  488107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:07:55.541727  488107 provision.go:86] duration metric: configureAuth took 374.601082ms
	I0325 02:07:55.541765  488107 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:07:55.541977  488107 config.go:176] Loaded profile config "embed-certs-20220325020743-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:07:55.541995  488107 machine.go:91] provisioned docker machine in 706.950294ms
	I0325 02:07:55.542002  488107 client.go:171] LocalClient.Create took 12.098129221s
	I0325 02:07:55.542017  488107 start.go:169] duration metric: libmachine.API.Create for "embed-certs-20220325020743-262786" took 12.098204509s
	I0325 02:07:55.542028  488107 start.go:302] post-start starting for "embed-certs-20220325020743-262786" (driver="docker")
	I0325 02:07:55.542038  488107 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:07:55.542085  488107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:07:55.542121  488107 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:07:55.576213  488107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49559 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:07:55.666869  488107 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:07:55.670081  488107 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:07:55.670114  488107 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:07:55.670130  488107 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:07:55.670139  488107 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:07:55.670152  488107 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:07:55.670271  488107 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:07:55.670368  488107 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:07:55.670478  488107 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:07:55.677607  488107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:07:55.695210  488107 start.go:305] post-start completed in 153.160774ms
	I0325 02:07:55.695589  488107 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220325020743-262786
	I0325 02:07:55.730313  488107 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/config.json ...
	I0325 02:07:55.730573  488107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:07:55.730627  488107 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:07:55.766069  488107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49559 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:07:55.855640  488107 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:07:55.859813  488107 start.go:130] duration metric: createHost completed in 12.419279011s
	I0325 02:07:55.859844  488107 start.go:81] releasing machines lock for "embed-certs-20220325020743-262786", held for 12.419462478s
	I0325 02:07:55.859926  488107 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220325020743-262786
	I0325 02:07:55.895435  488107 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:07:55.895512  488107 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:07:55.895436  488107 ssh_runner.go:195] Run: systemctl --version
	I0325 02:07:55.895604  488107 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:07:55.932267  488107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49559 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:07:55.932773  488107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49559 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:07:56.047153  488107 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:07:56.058024  488107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:07:56.067621  488107 docker.go:183] disabling docker service ...
	I0325 02:07:56.067696  488107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:07:56.085501  488107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:07:56.095694  488107 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:07:56.182421  488107 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:07:56.258806  488107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 02:07:56.268803  488107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:07:56.282399  488107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
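
Both files above travel the same way: the payload (first a crictl.yaml, then the full containerd config.toml) is base64-encoded into the command line and decoded with base64 -d on the node, which sidesteps shell quoting of multi-line content. A tiny round-trip sketch of the idea:

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Illustrative payload; the logged command carries the full config.toml.
	cfg := "version = 2\nroot = \"/var/lib/containerd\"\n"
	enc := base64.StdEncoding.EncodeToString([]byte(cfg))
	// On the node, `base64 -d | sudo tee /etc/containerd/config.toml`
	// reverses the encoding; here we just round-trip it locally.
	dec, err := base64.StdEncoding.DecodeString(enc)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", dec)
}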
	I0325 02:07:56.296378  488107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:07:56.303246  488107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:07:56.310040  488107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:07:56.386466  488107 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:07:56.452753  488107 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:07:56.452837  488107 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:07:56.457066  488107 start.go:462] Will wait 60s for crictl version
	I0325 02:07:56.457139  488107 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:07:56.480976  488107 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:07:56Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0325 02:07:57.830847  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:07:59.831204  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:02.331558  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:04.831111  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:06.831162  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:07.531073  488107 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:08:07.555705  488107 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 02:08:07.555760  488107 ssh_runner.go:195] Run: containerd --version
	I0325 02:08:07.576481  488107 ssh_runner.go:195] Run: containerd --version
	I0325 02:08:07.597574  488107 out.go:176] * Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	I0325 02:08:07.597652  488107 cli_runner.go:133] Run: docker network inspect embed-certs-20220325020743-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:08:07.631284  488107 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0325 02:08:07.634661  488107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:08:07.646667  488107 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 02:08:07.646756  488107 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:08:07.646815  488107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:08:07.670437  488107 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:08:07.670461  488107 containerd.go:526] Images already preloaded, skipping extraction
	I0325 02:08:07.670505  488107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:08:07.696638  488107 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:08:07.696664  488107 cache_images.go:84] Images are preloaded, skipping loading
	I0325 02:08:07.696709  488107 ssh_runner.go:195] Run: sudo crictl info
	I0325 02:08:07.722988  488107 cni.go:93] Creating CNI manager for ""
	I0325 02:08:07.723019  488107 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:08:07.723034  488107 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 02:08:07.723055  488107 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220325020743-262786 NodeName:embed-certs-20220325020743-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 02:08:07.723203  488107 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220325020743-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
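
The rendered file is one multi-document YAML: InitConfiguration and ClusterConfiguration drive kubeadm itself, while the KubeletConfiguration and KubeProxyConfiguration documents are forwarded to those components. A quick sanity check that all four documents reached the node (path from the log above):

    # List the document kinds in the generated kubeadm config.
    sudo grep -E '^(apiVersion|kind):' /var/tmp/minikube/kubeadm.yaml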
	
	I0325 02:08:07.723338  488107 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220325020743-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.3 ClusterName:embed-certs-20220325020743-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
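
Note the empty ExecStart= preceding the real one: for a non-oneshot systemd unit, a drop-in can only replace the command by first clearing the inherited ExecStart list; the second assignment then sets the new command line. The same override written by hand (paths from the log, kubelet flags abbreviated for the sketch):

    # Replace kubelet's command line via a systemd drop-in.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '[Service]\nExecStart=\nExecStart=%s\n' \
      '/var/lib/minikube/binaries/v1.23.3/kubelet --config=/var/lib/kubelet/config.yaml' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
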
	I0325 02:08:07.723410  488107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
	I0325 02:08:07.731115  488107 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 02:08:07.731209  488107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 02:08:07.738523  488107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (578 bytes)
	I0325 02:08:07.751982  488107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0325 02:08:07.765832  488107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2062 bytes)
	I0325 02:08:07.779356  488107 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0325 02:08:07.782481  488107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:08:07.792595  488107 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786 for IP: 192.168.58.2
	I0325 02:08:07.792743  488107 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 02:08:07.792805  488107 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 02:08:07.792880  488107 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/client.key
	I0325 02:08:07.792902  488107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/client.crt with IP's: []
	I0325 02:08:08.050173  488107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/client.crt ...
	I0325 02:08:08.050205  488107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/client.crt: {Name:mk6782170f07e49c7380aa64d907eb93ef1eb60c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:08:08.050452  488107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/client.key ...
	I0325 02:08:08.050475  488107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/client.key: {Name:mkecae272fa0cc17b019dc14a07c17d5b9c7d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:08:08.050587  488107 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/apiserver.key.cee25041
	I0325 02:08:08.050604  488107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0325 02:08:08.574515  488107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/apiserver.crt.cee25041 ...
	I0325 02:08:08.574555  488107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/apiserver.crt.cee25041: {Name:mk6a714ff0c5dc5a39613550ca4bdf0da4efb6de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:08:08.574756  488107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/apiserver.key.cee25041 ...
	I0325 02:08:08.574770  488107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/apiserver.key.cee25041: {Name:mk83e4fde2dc63a89530ba6258f9eed963d563e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:08:08.574853  488107 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/apiserver.crt
	I0325 02:08:08.574911  488107 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/apiserver.key
	I0325 02:08:08.574969  488107 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/proxy-client.key
	I0325 02:08:08.574985  488107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/proxy-client.crt with IP's: []
	I0325 02:08:08.679939  488107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/proxy-client.crt ...
	I0325 02:08:08.679983  488107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/proxy-client.crt: {Name:mkf04688d05974582d70f78b92f568bea2166362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:08:08.680192  488107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/proxy-client.key ...
	I0325 02:08:08.680237  488107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/proxy-client.key: {Name:mke98798ce6fc1a116205ca7dd558480fdc22c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:08:08.680428  488107 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 02:08:08.680468  488107 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 02:08:08.680480  488107 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 02:08:08.680505  488107 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 02:08:08.680532  488107 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 02:08:08.680557  488107 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 02:08:08.680601  488107 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
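
At this point the profile holds three freshly signed leaf pairs: client.crt for user authentication and apiserver.crt (with the SAN set generated above: 192.168.58.2, 10.96.0.1, 127.0.0.1, 10.0.0.1 plus the control-plane names) under minikubeCA, and proxy-client.crt under proxyClientCA for the API aggregation layer. To confirm the SANs actually embedded once the cert lands on the node:

    # Show the subject alternative names baked into the apiserver certificate.
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'
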
	I0325 02:08:08.681167  488107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 02:08:08.700379  488107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0325 02:08:08.719488  488107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 02:08:08.741626  488107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220325020743-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0325 02:08:08.760380  488107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 02:08:08.778391  488107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 02:08:08.796539  488107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 02:08:08.816016  488107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 02:08:08.834375  488107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 02:08:08.852168  488107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 02:08:08.869782  488107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 02:08:08.887212  488107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 02:08:08.900589  488107 ssh_runner.go:195] Run: openssl version
	I0325 02:08:08.905408  488107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 02:08:08.913123  488107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 02:08:08.916242  488107 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 02:08:08.916288  488107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 02:08:08.921106  488107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
	I0325 02:08:08.928790  488107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 02:08:08.936437  488107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:08:08.939850  488107 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:08:08.939917  488107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:08:08.944917  488107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0325 02:08:08.953059  488107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 02:08:08.961007  488107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 02:08:08.964267  488107 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 02:08:08.964331  488107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 02:08:08.969439  488107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
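
The test -L || ln -fs lines implement OpenSSL's hashed-directory lookup: TLS libraries resolve a trust anchor by <subject-hash>.0 in /etc/ssl/certs, so minikube computes each hash itself (e.g. b5213941 for minikubeCA) rather than running c_rehash on the node. The same step by hand:

    # Compute the subject hash and create the symlink OpenSSL resolves.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
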
	I0325 02:08:08.977271  488107 kubeadm.go:391] StartCluster: {Name:embed-certs-20220325020743-262786 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:embed-certs-20220325020743-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:08:08.977370  488107 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 02:08:08.977414  488107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:08:09.002808  488107 cri.go:87] found id: ""
	I0325 02:08:09.002879  488107 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 02:08:09.010932  488107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:08:09.018722  488107 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 02:08:09.018793  488107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:08:09.027186  488107 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0325 02:08:09.027240  488107 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
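
Because the "node" is a Docker container rather than a VM, the checks that would fail there (SystemVerification, Swap, Mem, the bridge-nf sysctl file, and the directory/file-already-exists checks) are suppressed via --ignore-preflight-errors. The preflight phase can also be exercised on its own before committing to a full init (kubeadm v1.23 supports running phases individually):

    # Run only the preflight checks against the generated config.
    sudo /var/lib/minikube/binaries/v1.23.3/kubeadm init phase preflight \
      --config /var/tmp/minikube/kubeadm.yaml
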
	I0325 02:08:09.296297  488107 out.go:203]   - Generating certificates and keys ...
	I0325 02:08:09.330917  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:11.331447  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:11.673250  488107 out.go:203]   - Booting up control plane ...
	I0325 02:08:13.331743  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:15.830611  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:17.831386  479985 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:08:17.834384  479985 node_ready.go:38] duration metric: took 4m0.010565517s waiting for node "no-preload-20220325020326-262786" to be "Ready" ...
	I0325 02:08:17.837380  479985 out.go:176] 
	W0325 02:08:17.837581  479985 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:08:17.837614  479985 out.go:241] * 
	W0325 02:08:17.838700  479985 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	2dae6f150a56e       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   6384b2b4644e2
	ba9cd9bebb023       6de166512aa22       3 minutes ago        Exited              kindnet-cni               0                   6384b2b4644e2
	0f40034eeb6e5       abbcf459c7739       4 minutes ago        Running             kube-proxy                0                   3bfde249ebac0
	ca6eb75c498fb       ce3b8500a91ff       4 minutes ago        Running             kube-apiserver            0                   df4aa21cc08ee
	fad18b6ff5e71       4a82fd4414312       4 minutes ago        Running             kube-scheduler            0                   f2cefe0b290e6
	e6d0357cdf9c2       25f8c7f3da61c       4 minutes ago        Running             etcd                      0                   559dbd4a425c8
	b96c3eba0f9ad       9f243260866d4       4 minutes ago        Running             kube-controller-manager   0                   c344b873121f2
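
This table corresponds to crictl ps -a output, and the telling rows are the two kindnet-cni entries: attempt 0 Exited plus attempt 1 Running means the CNI pod crashed once (at 02:07:02 per the containerd log below) and was restarted, consistent with the node never reaching Ready. Reproducing the view on the node:

    # List all CRI containers, including exited attempts.
    sudo crictl ps -a | grep kindnet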
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-03-25 02:03:29 UTC, end at Fri 2022-03-25 02:08:19 UTC. --
	Mar 25 02:04:17 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:04:17.637655454Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3bfde249ebac013ba93acdb622b23f2f512b7dee794bf2b9bc545aba40d9dff7 pid=2225
	Mar 25 02:04:17 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:04:17.708025501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l6tg2,Uid:f41c6b8d-0d57-4096-af80-8e9a7da29b60,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bfde249ebac013ba93acdb622b23f2f512b7dee794bf2b9bc545aba40d9dff7\""
	Mar 25 02:04:17 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:04:17.710839284Z" level=info msg="CreateContainer within sandbox \"3bfde249ebac013ba93acdb622b23f2f512b7dee794bf2b9bc545aba40d9dff7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Mar 25 02:04:17 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:04:17.731487394Z" level=info msg="CreateContainer within sandbox \"3bfde249ebac013ba93acdb622b23f2f512b7dee794bf2b9bc545aba40d9dff7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc\""
	Mar 25 02:04:17 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:04:17.732560974Z" level=info msg="StartContainer for \"0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc\""
	Mar 25 02:04:17 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:04:17.888383522Z" level=info msg="StartContainer for \"0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc\" returns successfully"
	Mar 25 02:04:17 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:04:17.990054937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-nhlsm,Uid:57939cf7-016c-486a-8a08-466ff1515c1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6384b2b4644e29e3d0f13baba82f8569a1e53ec5f8459364d05edd26163230f1\""
	Mar 25 02:04:17 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:04:17.992223743Z" level=info msg="PullImage \"kindest/kindnetd:v20210326-1e038dc5\""
	Mar 25 02:04:21 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:04:21.513821446Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/kindest/kindnetd:v20210326-1e038dc5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Mar 25 02:04:21 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:04:21.905182667Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Mar 25 02:04:22 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:04:22.191376045Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/kindest/kindnetd:v20210326-1e038dc5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Mar 25 02:04:22 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:04:22.312675001Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Mar 25 02:04:22 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:04:22.313609689Z" level=info msg="PullImage \"kindest/kindnetd:v20210326-1e038dc5\" returns image reference \"sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb\""
	Mar 25 02:04:22 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:04:22.315490371Z" level=info msg="CreateContainer within sandbox \"6384b2b4644e29e3d0f13baba82f8569a1e53ec5f8459364d05edd26163230f1\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Mar 25 02:04:22 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:04:22.331170398Z" level=info msg="CreateContainer within sandbox \"6384b2b4644e29e3d0f13baba82f8569a1e53ec5f8459364d05edd26163230f1\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"ba9cd9bebb023a0ee09c02501fa27e5b19e2da687fd2241fa98b7268ecd3d0f3\""
	Mar 25 02:04:22 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:04:22.331724527Z" level=info msg="StartContainer for \"ba9cd9bebb023a0ee09c02501fa27e5b19e2da687fd2241fa98b7268ecd3d0f3\""
	Mar 25 02:04:22 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:04:22.589046550Z" level=info msg="StartContainer for \"ba9cd9bebb023a0ee09c02501fa27e5b19e2da687fd2241fa98b7268ecd3d0f3\" returns successfully"
	Mar 25 02:07:02 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:07:02.834256335Z" level=info msg="shim disconnected" id=ba9cd9bebb023a0ee09c02501fa27e5b19e2da687fd2241fa98b7268ecd3d0f3
	Mar 25 02:07:02 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:07:02.834316386Z" level=warning msg="cleaning up after shim disconnected" id=ba9cd9bebb023a0ee09c02501fa27e5b19e2da687fd2241fa98b7268ecd3d0f3 namespace=k8s.io
	Mar 25 02:07:02 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:07:02.834330850Z" level=info msg="cleaning up dead shim"
	Mar 25 02:07:02 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:07:02.846743715Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:07:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2626\n"
	Mar 25 02:07:03 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:07:03.477122698Z" level=info msg="CreateContainer within sandbox \"6384b2b4644e29e3d0f13baba82f8569a1e53ec5f8459364d05edd26163230f1\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Mar 25 02:07:03 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:07:03.493998261Z" level=info msg="CreateContainer within sandbox \"6384b2b4644e29e3d0f13baba82f8569a1e53ec5f8459364d05edd26163230f1\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"2dae6f150a56e966157e1812d0b09c8f242a61b354ba0703bb51a0f610359afb\""
	Mar 25 02:07:03 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:07:03.494684855Z" level=info msg="StartContainer for \"2dae6f150a56e966157e1812d0b09c8f242a61b354ba0703bb51a0f610359afb\""
	Mar 25 02:07:03 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:07:03.688953659Z" level=info msg="StartContainer for \"2dae6f150a56e966157e1812d0b09c8f242a61b354ba0703bb51a0f610359afb\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220325020326-262786
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220325020326-262786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95
	                    minikube.k8s.io/name=no-preload-20220325020326-262786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_25T02_04_04_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Mar 2022 02:04:01 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220325020326-262786
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 25 Mar 2022 02:08:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Mar 2022 02:04:40 +0000   Fri, 25 Mar 2022 02:03:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Mar 2022 02:04:40 +0000   Fri, 25 Mar 2022 02:03:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Mar 2022 02:04:40 +0000   Fri, 25 Mar 2022 02:03:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 25 Mar 2022 02:04:40 +0000   Fri, 25 Mar 2022 02:03:58 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20220325020326-262786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                38254055-e8ea-4285-a000-185429061264
	  Boot ID:                    63fce5d9-a30b-498a-bfed-7dd46d23a363
	  Kernel Version:             5.13.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.4-rc.0
	  Kube-Proxy Version:         v1.23.4-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220325020326-262786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m10s
	  kube-system                 kindnet-nhlsm                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m3s
	  kube-system                 kube-apiserver-no-preload-20220325020326-262786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-no-preload-20220325020326-262786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-l6tg2                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-no-preload-20220325020326-262786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m1s   kube-proxy  
	  Normal  Starting                 4m10s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s  kubelet     Node no-preload-20220325020326-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s  kubelet     Node no-preload-20220325020326-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s  kubelet     Node no-preload-20220325020326-262786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s  kubelet     Updated Node Allocatable limit across pods
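
The describe output pins the failure: every other condition is healthy, but Ready stays False with "cni plugin not initialized", i.e. the kubelet (pointed at /etc/cni/net.mk by the extra arg above) never found a usable CNI config, even though the kindnet container itself is running. A direct check on the node (path taken from the log):

    # An empty or missing conf dir keeps the node NotReady with exactly this message.
    ls -la /etc/cni/net.mk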
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 c2 02 b4 5e ae 08 06
	[  +0.571963] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +0.447955] IPv4: martian source 10.244.0.232 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 c2 02 b4 5e ae 08 06
	[  +0.575960] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +2.379922] IPv4: martian source 10.244.0.232 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 c2 02 b4 5e ae 08 06
	[  +0.575919] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +0.435901] IPv4: martian source 10.244.0.232 from 10.244.0.4, on dev eth0
	[  +0.000052] ll header: 00000000: ff ff ff ff ff ff a6 c2 02 b4 5e ae 08 06
	[  +0.579935] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +0.447921] IPv4: martian source 10.244.0.232 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 c2 02 b4 5e ae 08 06
	[  +0.571998] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +2.387887] IPv4: martian source 10.244.0.232 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 c2 02 b4 5e ae 08 06
	[  +1.003829] IPv4: martian source 10.244.0.232 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 c2 02 b4 5e ae 08 06
	[  +1.027929] IPv4: martian source 10.244.0.232 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 c2 02 b4 5e ae 08 06
	
	* 
	* ==> etcd [e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710] <==
	* {"level":"info","ts":"2022-03-25T02:03:58.185Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-03-25T02:03:59.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-03-25T02:03:59.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-03-25T02:03:59.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-03-25T02:03:59.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-03-25T02:03:59.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-03-25T02:03:59.111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-03-25T02:03:59.111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-03-25T02:03:59.111Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:03:59.112Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:03:59.112Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:03:59.112Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:03:59.112Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20220325020326-262786 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-03-25T02:03:59.112Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:03:59.112Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:03:59.113Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-25T02:03:59.113Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-25T02:03:59.114Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-03-25T02:03:59.114Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2022-03-25T02:04:05.663Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"242.239014ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:242"}
	{"level":"info","ts":"2022-03-25T02:04:05.663Z","caller":"traceutil/trace.go:171","msg":"trace[1467328274] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:315; }","duration":"242.361798ms","start":"2022-03-25T02:04:05.421Z","end":"2022-03-25T02:04:05.663Z","steps":["trace[1467328274] 'agreement among raft nodes before linearized reading'  (duration: 56.930919ms)","trace[1467328274] 'range keys from in-memory index tree'  (duration: 185.270726ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-25T02:04:05.664Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"185.300628ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289939393067032833 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" mod_revision:312 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" value_size:186 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-03-25T02:04:05.664Z","caller":"traceutil/trace.go:171","msg":"trace[2014320345] transaction","detail":"{read_only:false; response_revision:316; number_of_response:1; }","duration":"241.496188ms","start":"2022-03-25T02:04:05.422Z","end":"2022-03-25T02:04:05.664Z","steps":["trace[2014320345] 'process raft request'  (duration: 55.599935ms)","trace[2014320345] 'compare'  (duration: 185.185272ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-25T02:04:20.449Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"119.997456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-20220325020326-262786\" ","response":"range_response_count:1 size:3937"}
	{"level":"info","ts":"2022-03-25T02:04:20.450Z","caller":"traceutil/trace.go:171","msg":"trace[497694994] range","detail":"{range_begin:/registry/minions/no-preload-20220325020326-262786; range_end:; response_count:1; response_revision:464; }","duration":"120.095305ms","start":"2022-03-25T02:04:20.329Z","end":"2022-03-25T02:04:20.449Z","steps":["trace[497694994] 'range keys from in-memory index tree'  (duration: 119.847306ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  02:08:19 up  4:46,  0 users,  load average: 0.68, 1.05, 1.56
	Linux no-preload-20220325020326-262786 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252] <==
	* I0325 02:04:01.184668       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0325 02:04:01.184751       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0325 02:04:01.194414       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0325 02:04:01.205012       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0325 02:04:01.211748       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0325 02:04:01.215859       1 controller.go:611] quota admission added evaluator for: namespaces
	I0325 02:04:02.083908       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0325 02:04:02.083942       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0325 02:04:02.088759       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0325 02:04:02.092713       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0325 02:04:02.092732       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0325 02:04:02.491818       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0325 02:04:02.527326       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0325 02:04:02.631377       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0325 02:04:02.635980       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0325 02:04:02.637060       1 controller.go:611] quota admission added evaluator for: endpoints
	I0325 02:04:02.641368       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0325 02:04:03.224584       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0325 02:04:03.903586       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0325 02:04:03.912681       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0325 02:04:03.924232       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0325 02:04:09.097471       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0325 02:04:16.674024       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0325 02:04:17.226167       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0325 02:04:18.108269       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244] <==
	* I0325 02:04:16.294792       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0325 02:04:16.306176       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0325 02:04:16.309387       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0325 02:04:16.319146       1 shared_informer.go:247] Caches are synced for PV protection 
	I0325 02:04:16.321347       1 shared_informer.go:247] Caches are synced for cronjob 
	I0325 02:04:16.321379       1 shared_informer.go:247] Caches are synced for expand 
	I0325 02:04:16.323537       1 shared_informer.go:247] Caches are synced for GC 
	I0325 02:04:16.323595       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0325 02:04:16.324698       1 shared_informer.go:247] Caches are synced for endpoint 
	I0325 02:04:16.326019       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0325 02:04:16.327121       1 shared_informer.go:247] Caches are synced for deployment 
	I0325 02:04:16.335393       1 shared_informer.go:247] Caches are synced for stateful set 
	I0325 02:04:16.427013       1 shared_informer.go:247] Caches are synced for resource quota 
	I0325 02:04:16.487473       1 shared_informer.go:247] Caches are synced for resource quota 
	I0325 02:04:16.525760       1 shared_informer.go:247] Caches are synced for attach detach 
	I0325 02:04:16.681203       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-l6tg2"
	I0325 02:04:16.682918       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-nhlsm"
	I0325 02:04:16.948974       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0325 02:04:17.002745       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0325 02:04:17.002770       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0325 02:04:17.228152       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0325 02:04:17.243060       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0325 02:04:17.326473       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-4tdtv"
	I0325 02:04:17.331846       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-b9827"
	I0325 02:04:17.391815       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-4tdtv"
	
	* 
	* ==> kube-proxy [0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc] <==
	* I0325 02:04:17.997460       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0325 02:04:17.997538       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0325 02:04:17.997636       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0325 02:04:18.101927       1 server_others.go:206] "Using iptables Proxier"
	I0325 02:04:18.101964       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0325 02:04:18.101979       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0325 02:04:18.102006       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0325 02:04:18.102515       1 server.go:656] "Version info" version="v1.23.4-rc.0"
	I0325 02:04:18.103198       1 config.go:226] "Starting endpoint slice config controller"
	I0325 02:04:18.103254       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0325 02:04:18.103468       1 config.go:317] "Starting service config controller"
	I0325 02:04:18.103487       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0325 02:04:18.204147       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0325 02:04:18.204238       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b] <==
	* W0325 02:04:01.205004       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0325 02:04:01.205041       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0325 02:04:01.205131       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:04:01.205164       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:04:01.205173       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0325 02:04:01.205081       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 02:04:01.205195       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0325 02:04:01.207071       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 02:04:01.207301       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0325 02:04:02.013087       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 02:04:02.013124       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0325 02:04:02.073909       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 02:04:02.073940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0325 02:04:02.127514       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:04:02.127555       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0325 02:04:02.131604       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0325 02:04:02.131636       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0325 02:04:02.135598       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 02:04:02.135634       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0325 02:04:02.165918       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 02:04:02.165954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0325 02:04:02.188481       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0325 02:04:02.188518       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0325 02:04:02.598545       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0325 02:04:02.741384       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-03-25 02:03:29 UTC, end at Fri 2022-03-25 02:08:19 UTC. --
	Mar 25 02:06:19 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:06:19.334139    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:06:24 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:06:24.335357    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:06:29 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:06:29.336904    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:06:34 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:06:34.337698    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:06:39 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:06:39.339081    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:06:44 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:06:44.340151    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:06:49 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:06:49.341707    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:06:54 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:06:54.343326    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:06:59 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:06:59.345026    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:07:03 no-preload-20220325020326-262786 kubelet[1827]: I0325 02:07:03.475051    1827 scope.go:110] "RemoveContainer" containerID="ba9cd9bebb023a0ee09c02501fa27e5b19e2da687fd2241fa98b7268ecd3d0f3"
	Mar 25 02:07:04 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:07:04.345788    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:07:09 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:07:09.347115    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:07:14 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:07:14.347716    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:07:19 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:07:19.349870    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:07:24 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:07:24.351013    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:07:29 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:07:29.352051    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:07:34 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:07:34.353039    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:07:39 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:07:39.354001    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:07:44 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:07:44.355042    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:07:49 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:07:49.356583    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:07:54 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:07:54.357556    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:07:59 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:07:59.358175    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:08:04 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:08:04.359669    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:08:09 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:08:09.361240    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:08:14 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:08:14.362605    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220325020326-262786 -n no-preload-20220325020326-262786
helpers_test.go:262: (dbg) Run:  kubectl --context no-preload-20220325020326-262786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-64897985d-b9827 storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context no-preload-20220325020326-262786 describe pod coredns-64897985d-b9827 storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20220325020326-262786 describe pod coredns-64897985d-b9827 storage-provisioner: exit status 1 (58.639046ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-b9827" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context no-preload-20220325020326-262786 describe pod coredns-64897985d-b9827 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (293.09s)
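The symptom is consistent throughout the logs above: the kubelet repeats "cni plugin not initialized", which keeps the node NotReady, so coredns-64897985d-b9827 and storage-provisioner never leave Pending and the wait times out. The sketch below is a hypothetical client-go helper (not part of this test suite) showing the kind of node-readiness poll that would surface this condition directly; the 8-minute budget mirrors the timeout the tests use, and KUBECONFIG is read from the environment as in the report's setup.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allReady reports whether every node has the Ready condition set to True.
func allReady(nodes []corev1.Node) bool {
	for _, n := range nodes {
		ready := false
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return len(nodes) > 0
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(8 * time.Minute) // same budget as the test
	for time.Now().Before(deadline) {
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err == nil && allReady(nodes.Items) {
			fmt.Println("all nodes Ready")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out: node never became Ready (CNI plugin not initialized)")
}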

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (345.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
E0325 02:04:08.154257  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128884886s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
E0325 02:04:23.779794  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13692758s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
E0325 02:04:40.295159  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143986541s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
E0325 02:05:07.980052  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134937599s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132011772s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132608425s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135521428s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0325 02:06:12.032016  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
E0325 02:06:12.037404  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
E0325 02:06:12.047715  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
E0325 02:06:12.068037  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
E0325 02:06:12.108370  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
E0325 02:06:12.188714  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
E0325 02:06:12.349670  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
E0325 02:06:12.669958  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
E0325 02:06:13.310890  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
E0325 02:06:14.591883  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
E0325 02:06:17.152322  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133543094s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
E0325 02:06:50.838352  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 02:06:52.995802  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.149167682s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131265876s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0325 02:07:33.956861  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12360927s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220325014920-262786 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141534154s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (345.02s)
E0325 02:17:37.497840  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:18:05.183172  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:18:47.791135  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 02:18:56.094036  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 02:18:57.673477  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
E0325 02:19:25.357730  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
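Every attempt above follows the same pattern: the test execs nslookup inside the netcat deployment and fails because the expected cluster DNS answer ("10.96.0.1", per the "want" in the failure message at net_test.go:174) never appears; each run instead times out after ~15s. A minimal stand-alone sketch of that retry-and-match loop, assuming kubectl is on PATH (an illustration, not the actual net_test.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const kubeContext = "bridge-20220325014920-262786" // profile under test above
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"exec", "deployment/netcat", "--",
			"nslookup", "kubernetes.default").CombinedOutput()
		// A healthy cluster answers with the kubernetes.default service IP;
		// the failing runs above print "connection timed out; no servers
		// could be reached" instead.
		if err == nil && strings.Contains(string(out), "10.96.0.1") {
			fmt.Printf("DNS resolved on attempt %d\n", attempt)
			return
		}
		time.Sleep(15 * time.Second)
	}
	fmt.Println("failed to do nslookup on kubernetes.default")
}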

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (484.6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220325020326-262786 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [9dd8c087-1c45-4568-8a62-cd0ca3c442e4] Pending
helpers_test.go:343: "busybox" [9dd8c087-1c45-4568-8a62-cd0ca3c442e4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: ***** TestStartStop/group/no-preload/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:181: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220325020326-262786 -n no-preload-20220325020326-262786
start_stop_delete_test.go:181: TestStartStop/group/no-preload/serial/DeployApp: showing logs for failed pods as of 2022-03-25 02:16:20.534366675 +0000 UTC m=+3504.860495662
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220325020326-262786 describe po busybox -n default
start_stop_delete_test.go:181: (dbg) kubectl --context no-preload-20220325020326-262786 describe po busybox -n default:
Name:         busybox
Namespace:    default
Priority:     0
Node:         <none>
Labels:       integration-test=busybox
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qzqv (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-4qzqv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  46s (x8 over 8m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220325020326-262786 logs busybox -n default
start_stop_delete_test.go:181: (dbg) kubectl --context no-preload-20220325020326-262786 logs busybox -n default:
start_stop_delete_test.go:181: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
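This DeployApp timeout is a downstream symptom of the same NotReady node: busybox stays Pending because the scheduler refuses the node.kubernetes.io/not-ready taint, as the FailedScheduling event above shows. A hypothetical sketch of the wait the harness performs (helpers_test.go uses its own helpers; this illustration shells out to kubectl instead):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const kubeContext = "no-preload-20220325020326-262786"
	deadline := time.Now().Add(8 * time.Minute) // same budget as the test
	for time.Now().Before(deadline) {
		// Collect the phase of every pod labeled integration-test=busybox.
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "po", "-l", "integration-test=busybox",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		running := err == nil && len(phases) > 0
		for _, p := range phases {
			if p != "Running" {
				running = false
			}
		}
		if running {
			fmt.Println("busybox is Running")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for integration-test=busybox")
}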
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20220325020326-262786
helpers_test.go:236: (dbg) docker inspect no-preload-20220325020326-262786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778",
	        "Created": "2022-03-25T02:03:28.535684956Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 480559,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-25T02:03:28.905976635Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/hostname",
	        "HostsPath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/hosts",
	        "LogPath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778-json.log",
	        "Name": "/no-preload-20220325020326-262786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-20220325020326-262786:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220325020326-262786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6-init/diff:/var/lib/docker/overlay2/be54eb94fcdf63a43430c63a2deca34069b6322a2c5c80bf32c41c253b4eef44/diff:/var/lib/docker/overlay2/21ae1babc9289fff260c3571871aeb833b37e21656a9cc3eb8df07eb3fe4f79a/diff:/var/lib/docker/overlay2/3ee3331c2ddb88569b411d0ab54a2ef7d7d01ca16c80ced0833580bfbe9bdada/diff:/var/lib/docker/overlay2/d8bc8d60c9bd47ff1095b644ef6d44396d637a148bcebd5ea3b7706fee6b13fe/diff:/var/lib/docker/overlay2/ca1519d93c9c70a99f709b179bab33e31837f4b561c407c362770656a0ad970a/diff:/var/lib/docker/overlay2/8b7ed626d5c01c442f80e5e9bbe87bdaa4e3b209e4d0720010e78ab32631a44a/diff:/var/lib/docker/overlay2/fb54723378f675b6bc72cd8608807007fdf0fc435e1383398764588c2881dcc7/diff:/var/lib/docker/overlay2/20deb1df880f3adcdce0caa0e0b6ce0170bb01f7b7c564aa7c00c10e886a8422/diff:/var/lib/docker/overlay2/3e9c58516a6ca7eb07cbd77ece10826bcffc2c564c20a046413c894f1e457c14/diff:/var/lib/docker/overlay2/9fb4a5
72727350e63058db77497edb3aa8f3fd157bf3faa4b882f3d6218a2d2c/diff:/var/lib/docker/overlay2/2dad70b776042365cd2686f6925d1728b98e82e82f5ec21fcafaa6ce796653ed/diff:/var/lib/docker/overlay2/d94272e0e249e656b05e1483e035d137254d3bab6b9c568065d1f8783a72cf04/diff:/var/lib/docker/overlay2/c92254120acded698585ef577c9ac3d6f73267981cf36a87ee38ccd694f47b94/diff:/var/lib/docker/overlay2/84b4bbb670c367ba779baabe503b5345e2c3e2beb5a4505c3b235e5db4e89ee6/diff:/var/lib/docker/overlay2/4981a02b24aef7d5c066a42837381dcdd4a299b491d8e55523fca674cd0db0d1/diff:/var/lib/docker/overlay2/c3c34e9c466bb3a144a51042f0930825943916afe285a7f97644c400518f341f/diff:/var/lib/docker/overlay2/44f4921d100d6ba90db390588513726503aec84844325bd99eeb137c6018277f/diff:/var/lib/docker/overlay2/a39a458488b7f863079e4c6b58196e8a4f9082987519a734c45a007cd0d94828/diff:/var/lib/docker/overlay2/f0312047c7b0b02fd66fd826e23406e40cee0ca3ceecfd3ead5dcecbc5026230/diff:/var/lib/docker/overlay2/96c9397a20500e41888794ddb5877995a1734042648a24d59ca0d2ec5021e9de/diff:/var/lib/d
ocker/overlay2/8dea2fad08fc7127380e7b5ee48074c49d9bb8abb4e0e626d1753b47e734e16a/diff:/var/lib/docker/overlay2/b45ce3d74626e250be956220b3bdd19784c7b5f160566cf2abc4e3bebec2e787/diff:/var/lib/docker/overlay2/c53d5b53646725c2e75d104fbdf63f67b1a6d4ec7be410f678c39db7ca88704f/diff:/var/lib/docker/overlay2/c989625713fed7c79c6acf122f86cb4a5d36c5c25f16b6ff042aba0f5c76ef40/diff:/var/lib/docker/overlay2/062c90de70f705242f19d7fd008480be165d852e536336d97bcfe7aaba03bc2c/diff:/var/lib/docker/overlay2/9dd5e3e1997449a8dd0820e30ab1aa5b34db265e9783f9431ebcca7ceaf17510/diff:/var/lib/docker/overlay2/4cb50a0a67380109d348cd3005e5b855fceaf243cf5b0130df8952ed58e6c56c/diff:/var/lib/docker/overlay2/1fce572a3789e30bd91fd684a3bc2cae58743b3718b1d078378158f22156795e/diff:/var/lib/docker/overlay2/2bb28738c8f2de75a3da83169e8b29e28c57bf73908fe80dbca06551ac39d459/diff:/var/lib/docker/overlay2/70c1f9c120af3acda7bbd97c063aeed205a47f16b2818b7a2c4e5cfa2e3321bf/diff:/var/lib/docker/overlay2/84cfc718f71abd3da77845f467dceaeceb62953d1f92e9cb2d966b19d2e
9a733/diff:/var/lib/docker/overlay2/8d6f862f75e903072cefca0f974c925dc5946ac5bf7bcb923adecf23cdb3d454/diff:/var/lib/docker/overlay2/778af97f4ec3a1e9ceed247958939b375c3209058ee649ac0231b3ccf59c0e5d/diff:/var/lib/docker/overlay2/c0e0a5b57f41ef9ddf67d67f928bcbbd060abb8aa3ec732c9ee48b3d5ce723a2/diff:/var/lib/docker/overlay2/f4bc2ed173f4985e492d89df0a08aa6017952a9ac37054537d57bb7589c1560e/diff:/var/lib/docker/overlay2/562d496753ef0c1e8279787dfdb7cb4d6e8cfbd0eaf79a1f9dc3fd10916330b5/diff:/var/lib/docker/overlay2/717fb77b4f16514e3bd496845adfe213bd63609053b43f6d800e6757197f0f04/diff:/var/lib/docker/overlay2/4e8d84337665652419a5a40f908d308087c202f55b785114c1e38be84a17eca7/diff:/var/lib/docker/overlay2/5b34f3b4b29c9f9ab991b524096584bbf01d14e9d8d4b7786bda6d28241999e8/diff:/var/lib/docker/overlay2/49e6c28c6a50420d2f8e2a3d3278d425495086d9478a7ece39dd989925949a5d/diff:/var/lib/docker/overlay2/86c1534e0117ca4e106fa3c177c4f1b2d85e37b9d2a5dceeb007afff1721713e/diff:/var/lib/docker/overlay2/c5013a5641f131cadca99884c2ae5b785bfae4
a079463490ea0cd215cd884d43/diff:/var/lib/docker/overlay2/f61ccdb261987275521111370c06a14baf102e5047e24281f278eaaee820a410/diff:/var/lib/docker/overlay2/46838e2b0c3f67b4bfda29963d76e2c8babbd54904a4a6f5745e924a73437c2d/diff:/var/lib/docker/overlay2/16180439a4d3ee12ff794b26cbfd692186d7785b4c6f33c8c57416535667c54e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220325020326-262786",
	                "Source": "/var/lib/docker/volumes/no-preload-20220325020326-262786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220325020326-262786",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220325020326-262786",
	                "name.minikube.sigs.k8s.io": "no-preload-20220325020326-262786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7f23607c3a5c08b0783c59d71ece486f3f43c024250c62ab15201265e18ba268",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49554"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49553"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49550"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49552"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49551"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7f23607c3a5c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220325020326-262786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6f52c20ff4ed",
	                        "no-preload-20220325020326-262786"
	                    ],
	                    "NetworkID": "6fbac9304f70e9e85060797caa05d374912c7ea43808a752012c2c1abc994540",
	                    "EndpointID": "e5245d3bbce08b43baa512fa9f1a16faf8d4935ea476d70841cfec48e04346df",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
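The post-mortem consults only a few fields of the full docker inspect dump above: the container state, its IP on the profile network, and the host port mapped to the API server port 8443/tcp (49551 here). A convenience sketch using docker's -f Go-template flag to pull just those fields (a hypothetical helper, not part of the suite):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const name = "no-preload-20220325020326-262786"
	// Template fields match the inspect output above: State.Status, the
	// network-scoped IPAddress, and the 8443/tcp host port binding.
	tmpl := `{{.State.Status}} ` +
		`{{(index .NetworkSettings.Networks "` + name + `").IPAddress}} ` +
		`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "-f", tmpl, name).Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("status/ip/apiserver-port: %s", out) // e.g. "running 192.168.67.2 49551"
}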
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220325020326-262786 -n no-preload-20220325020326-262786
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220325020326-262786 logs -n 25
helpers_test.go:253: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                         | enable-default-cni-20220325014920-262786         | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:07:40 UTC | Fri, 25 Mar 2022 02:07:43 UTC |
	|         | enable-default-cni-20220325014920-262786                   |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:08:18 UTC | Fri, 25 Mar 2022 02:08:19 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:07:43 UTC | Fri, 25 Mar 2022 02:08:42 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.3                               |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:08:51 UTC | Fri, 25 Mar 2022 02:08:52 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:08:52 UTC | Fri, 25 Mar 2022 02:09:12 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:12 UTC | Fri, 25 Mar 2022 02:09:12 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786                      | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:29 UTC | Fri, 25 Mar 2022 02:09:30 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786                      | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:31 UTC | Fri, 25 Mar 2022 02:09:32 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:32 UTC | Fri, 25 Mar 2022 02:09:33 UTC |
	|         | old-k8s-version-20220325015306-262786                      |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:33 UTC | Fri, 25 Mar 2022 02:09:39 UTC |
	|         | old-k8s-version-20220325015306-262786                      |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:39 UTC | Fri, 25 Mar 2022 02:09:39 UTC |
	|         | old-k8s-version-20220325015306-262786                      |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | bridge-20220325014920-262786                               | bridge-20220325014920-262786                     | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:52 UTC | Fri, 25 Mar 2022 02:09:53 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | bridge-20220325014920-262786                     | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:53 UTC | Fri, 25 Mar 2022 02:09:56 UTC |
	|         | bridge-20220325014920-262786                               |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20220325020956-262786      | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:56 UTC | Fri, 25 Mar 2022 02:09:56 UTC |
	|         | disable-driver-mounts-20220325020956-262786                |                                                  |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:12 UTC | Fri, 25 Mar 2022 02:14:36 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.3                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:47 UTC | Fri, 25 Mar 2022 02:14:47 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| pause   | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:47 UTC | Fri, 25 Mar 2022 02:14:48 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| unpause | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:49 UTC | Fri, 25 Mar 2022 02:14:50 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:52 UTC | Fri, 25 Mar 2022 02:14:53 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:51 UTC | Fri, 25 Mar 2022 02:14:54 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:54 UTC | Fri, 25 Mar 2022 02:14:54 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20220325021454-262786 --memory=2200          | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:54 UTC | Fri, 25 Mar 2022 02:15:49 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:49 UTC | Fri, 25 Mar 2022 02:15:50 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:50 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 02:16:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0325 02:16:10.857208  516439 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:16:10.857322  516439 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:16:10.857331  516439 out.go:310] Setting ErrFile to fd 2...
	I0325 02:16:10.857335  516439 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:16:10.857445  516439 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:16:10.857700  516439 out.go:304] Setting JSON to false
	I0325 02:16:10.859649  516439 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17643,"bootTime":1648156928,"procs":397,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:16:10.859731  516439 start.go:125] virtualization: kvm guest
	I0325 02:16:10.862412  516439 out.go:176] * [newest-cni-20220325021454-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:16:10.864106  516439 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:16:10.862611  516439 notify.go:193] Checking for updates...
	I0325 02:16:10.865924  516439 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:16:10.867399  516439 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:16:10.868947  516439 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:16:10.870479  516439 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:16:10.871011  516439 config.go:176] Loaded profile config "newest-cni-20220325021454-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:16:10.871482  516439 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:16:10.915061  516439 docker.go:136] docker version: linux-20.10.14
	I0325 02:16:10.915258  516439 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:16:11.017970  516439 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:16:10.946192925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:16:11.018120  516439 docker.go:253] overlay module found
	I0325 02:16:11.020944  516439 out.go:176] * Using the docker driver based on existing profile
	I0325 02:16:11.020986  516439 start.go:284] selected driver: docker
	I0325 02:16:11.020995  516439 start.go:801] validating driver "docker" against &{Name:newest-cni-20220325021454-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:newest-cni-20220325021454-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:16:11.021125  516439 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:16:11.021175  516439 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:16:11.021203  516439 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:16:11.023062  516439 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:16:11.023760  516439 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:16:11.127698  516439 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:16:11.057157245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0325 02:16:11.127890  516439 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:16:11.127933  516439 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:16:11.130327  516439 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:16:11.130453  516439 start_flags.go:853] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0325 02:16:11.130478  516439 cni.go:93] Creating CNI manager for ""
	I0325 02:16:11.130487  516439 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:16:11.130497  516439 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0325 02:16:11.130506  516439 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0325 02:16:11.130515  516439 start_flags.go:304] config:
	{Name:newest-cni-20220325021454-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:newest-cni-20220325021454-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:16:11.132570  516439 out.go:176] * Starting control plane node newest-cni-20220325021454-262786 in cluster newest-cni-20220325021454-262786
	I0325 02:16:11.132606  516439 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:16:11.134281  516439 out.go:176] * Pulling base image ...
	I0325 02:16:11.134320  516439 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime containerd
	I0325 02:16:11.134363  516439 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-rc.0-containerd-overlay2-amd64.tar.lz4
	I0325 02:16:11.134391  516439 cache.go:57] Caching tarball of preloaded images
	I0325 02:16:11.134462  516439 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:16:11.134590  516439 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 02:16:11.134609  516439 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.4-rc.0 on containerd
	I0325 02:16:11.134757  516439 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220325021454-262786/config.json ...
	I0325 02:16:11.171611  516439 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:16:11.171645  516439 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:16:11.171663  516439 cache.go:208] Successfully downloaded all kic artifacts
	I0325 02:16:11.171712  516439 start.go:348] acquiring machines lock for newest-cni-20220325021454-262786: {Name:mk4e896cd01057f7f0460e08ea6f76ea52e9fc11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:11.171839  516439 start.go:352] acquired machines lock for "newest-cni-20220325021454-262786" in 87.92µs
	I0325 02:16:11.171866  516439 start.go:94] Skipping create...Using existing machine configuration
	I0325 02:16:11.171876  516439 fix.go:55] fixHost starting: 
	I0325 02:16:11.172159  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:11.206127  516439 fix.go:108] recreateIfNeeded on newest-cni-20220325021454-262786: state=Stopped err=<nil>
	W0325 02:16:11.206165  516439 fix.go:134] unexpected machine state, will restart: <nil>
	I0325 02:16:10.469506  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:12.968692  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:11.208927  516439 out.go:176] * Restarting existing docker container for "newest-cni-20220325021454-262786" ...
	I0325 02:16:11.209008  516439 cli_runner.go:133] Run: docker start newest-cni-20220325021454-262786
	I0325 02:16:11.589688  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:11.626390  516439 kic.go:420] container "newest-cni-20220325021454-262786" state is running.
	I0325 02:16:11.626791  516439 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220325021454-262786
	I0325 02:16:11.662376  516439 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220325021454-262786/config.json ...
	I0325 02:16:11.662590  516439 machine.go:88] provisioning docker machine ...
	I0325 02:16:11.662617  516439 ubuntu.go:169] provisioning hostname "newest-cni-20220325021454-262786"
	I0325 02:16:11.662679  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:11.698397  516439 main.go:130] libmachine: Using SSH client type: native
	I0325 02:16:11.698632  516439 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49584 <nil> <nil>}
	I0325 02:16:11.698657  516439 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220325021454-262786 && echo "newest-cni-20220325021454-262786" | sudo tee /etc/hostname
	I0325 02:16:11.699319  516439 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47098->127.0.0.1:49584: read: connection reset by peer
	I0325 02:16:14.828030  516439 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20220325021454-262786
	
	I0325 02:16:14.828146  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:14.863689  516439 main.go:130] libmachine: Using SSH client type: native
	I0325 02:16:14.863891  516439 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49584 <nil> <nil>}
	I0325 02:16:14.863922  516439 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220325021454-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220325021454-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220325021454-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 02:16:14.987329  516439 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 02:16:14.987363  516439 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:16:14.987384  516439 ubuntu.go:177] setting up certificates
	I0325 02:16:14.987394  516439 provision.go:83] configureAuth start
	I0325 02:16:14.987449  516439 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220325021454-262786
	I0325 02:16:15.023486  516439 provision.go:138] copyHostCerts
	I0325 02:16:15.023557  516439 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:16:15.023568  516439 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:16:15.023649  516439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:16:15.023797  516439 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:16:15.023817  516439 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:16:15.023853  516439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:16:15.023988  516439 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:16:15.024001  516439 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:16:15.024036  516439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:16:15.024113  516439 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220325021454-262786 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220325021454-262786]
	I0325 02:16:15.191816  516439 provision.go:172] copyRemoteCerts
	I0325 02:16:15.191886  516439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:16:15.191924  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:15.226579  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:15.315237  516439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:16:15.334992  516439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0325 02:16:15.353220  516439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0325 02:16:15.370801  516439 provision.go:86] duration metric: configureAuth took 383.388958ms
	I0325 02:16:15.370835  516439 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:16:15.371067  516439 config.go:176] Loaded profile config "newest-cni-20220325021454-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:16:15.371086  516439 machine.go:91] provisioned docker machine in 3.708480718s
	I0325 02:16:15.371095  516439 start.go:302] post-start starting for "newest-cni-20220325021454-262786" (driver="docker")
	I0325 02:16:15.371108  516439 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:16:15.371167  516439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:16:15.371200  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:15.405621  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:15.499116  516439 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:16:15.502185  516439 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:16:15.502226  516439 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:16:15.502243  516439 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:16:15.502258  516439 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:16:15.502270  516439 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:16:15.502332  516439 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:16:15.502449  516439 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:16:15.502556  516439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:16:15.509900  516439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:16:15.528356  516439 start.go:305] post-start completed in 157.237952ms
	I0325 02:16:15.528444  516439 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:16:15.528483  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:15.562513  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:15.647560  516439 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:16:15.651753  516439 fix.go:57] fixHost completed within 4.479868063s
	I0325 02:16:15.651790  516439 start.go:81] releasing machines lock for "newest-cni-20220325021454-262786", held for 4.479925727s
	I0325 02:16:15.651894  516439 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220325021454-262786
	I0325 02:16:15.687947  516439 ssh_runner.go:195] Run: systemctl --version
	I0325 02:16:15.688009  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:15.688018  516439 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:16:15.688090  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:15.725441  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:15.725798  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:15.834086  516439 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:16:15.846106  516439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:16:15.855798  516439 docker.go:183] disabling docker service ...
	I0325 02:16:15.855858  516439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:16:14.968784  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:17.468253  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:19.468366  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:15.866222  516439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:16:15.876219  516439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:16:15.953515  516439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:16:16.027611  516439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 02:16:16.037575  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:16:16.052153  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0325 02:16:16.065317  516439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:16:16.072429  516439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:16:16.079342  516439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:16:16.150523  516439 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:16:16.225698  516439 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:16:16.225773  516439 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:16:16.229806  516439 start.go:462] Will wait 60s for crictl version
	I0325 02:16:16.229865  516439 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:16:16.256593  516439 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:16:16Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	e938c238f422e       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   6384b2b4644e2
	0f40034eeb6e5       abbcf459c7739       12 minutes ago      Running             kube-proxy                0                   3bfde249ebac0
	ca6eb75c498fb       ce3b8500a91ff       12 minutes ago      Running             kube-apiserver            0                   df4aa21cc08ee
	fad18b6ff5e71       4a82fd4414312       12 minutes ago      Running             kube-scheduler            0                   f2cefe0b290e6
	e6d0357cdf9c2       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   559dbd4a425c8
	b96c3eba0f9ad       9f243260866d4       12 minutes ago      Running             kube-controller-manager   0                   c344b873121f2
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-03-25 02:03:29 UTC, end at Fri 2022-03-25 02:16:21 UTC. --
	Mar 25 02:09:44 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:44.017541947Z" level=warning msg="cleaning up after shim disconnected" id=2dae6f150a56e966157e1812d0b09c8f242a61b354ba0703bb51a0f610359afb namespace=k8s.io
	Mar 25 02:09:44 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:44.017558416Z" level=info msg="cleaning up dead shim"
	Mar 25 02:09:44 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:44.029089294Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:09:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2964\n"
	Mar 25 02:09:44 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:44.752305553Z" level=info msg="RemoveContainer for \"ba9cd9bebb023a0ee09c02501fa27e5b19e2da687fd2241fa98b7268ecd3d0f3\""
	Mar 25 02:09:44 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:44.758385525Z" level=info msg="RemoveContainer for \"ba9cd9bebb023a0ee09c02501fa27e5b19e2da687fd2241fa98b7268ecd3d0f3\" returns successfully"
	Mar 25 02:09:56 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:56.110231270Z" level=info msg="CreateContainer within sandbox \"6384b2b4644e29e3d0f13baba82f8569a1e53ec5f8459364d05edd26163230f1\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Mar 25 02:09:56 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:56.125867036Z" level=info msg="CreateContainer within sandbox \"6384b2b4644e29e3d0f13baba82f8569a1e53ec5f8459364d05edd26163230f1\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"5689d84e1ccc0d805363d2c87e55a31681b9b4f56029bd56cf6bff5839bf21ae\""
	Mar 25 02:09:56 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:56.126375854Z" level=info msg="StartContainer for \"5689d84e1ccc0d805363d2c87e55a31681b9b4f56029bd56cf6bff5839bf21ae\""
	Mar 25 02:09:56 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:56.289387660Z" level=info msg="StartContainer for \"5689d84e1ccc0d805363d2c87e55a31681b9b4f56029bd56cf6bff5839bf21ae\" returns successfully"
	Mar 25 02:12:36 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:12:36.531653697Z" level=info msg="shim disconnected" id=5689d84e1ccc0d805363d2c87e55a31681b9b4f56029bd56cf6bff5839bf21ae
	Mar 25 02:12:36 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:12:36.531741643Z" level=warning msg="cleaning up after shim disconnected" id=5689d84e1ccc0d805363d2c87e55a31681b9b4f56029bd56cf6bff5839bf21ae namespace=k8s.io
	Mar 25 02:12:36 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:12:36.531757935Z" level=info msg="cleaning up dead shim"
	Mar 25 02:12:36 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:12:36.542075349Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:12:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3068\n"
	Mar 25 02:12:37 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:12:37.040589792Z" level=info msg="RemoveContainer for \"2dae6f150a56e966157e1812d0b09c8f242a61b354ba0703bb51a0f610359afb\""
	Mar 25 02:12:37 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:12:37.045314816Z" level=info msg="RemoveContainer for \"2dae6f150a56e966157e1812d0b09c8f242a61b354ba0703bb51a0f610359afb\" returns successfully"
	Mar 25 02:13:05 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:13:05.110637008Z" level=info msg="CreateContainer within sandbox \"6384b2b4644e29e3d0f13baba82f8569a1e53ec5f8459364d05edd26163230f1\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Mar 25 02:13:05 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:13:05.124763139Z" level=info msg="CreateContainer within sandbox \"6384b2b4644e29e3d0f13baba82f8569a1e53ec5f8459364d05edd26163230f1\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741\""
	Mar 25 02:13:05 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:13:05.125334830Z" level=info msg="StartContainer for \"e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741\""
	Mar 25 02:13:05 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:13:05.288517762Z" level=info msg="StartContainer for \"e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741\" returns successfully"
	Mar 25 02:15:45 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:15:45.533664227Z" level=info msg="shim disconnected" id=e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741
	Mar 25 02:15:45 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:15:45.533733885Z" level=warning msg="cleaning up after shim disconnected" id=e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741 namespace=k8s.io
	Mar 25 02:15:45 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:15:45.533745995Z" level=info msg="cleaning up dead shim"
	Mar 25 02:15:45 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:15:45.544548942Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:15:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3173\n"
	Mar 25 02:15:46 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:15:46.361290100Z" level=info msg="RemoveContainer for \"5689d84e1ccc0d805363d2c87e55a31681b9b4f56029bd56cf6bff5839bf21ae\""
	Mar 25 02:15:46 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:15:46.366577819Z" level=info msg="RemoveContainer for \"5689d84e1ccc0d805363d2c87e55a31681b9b4f56029bd56cf6bff5839bf21ae\" returns successfully"
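	The containerd excerpt shows one full crash-loop cycle repeating: CreateContainer, StartContainer, "shim disconnected" about 160 seconds later, then RemoveContainer once kubelet retires the dead attempt. To follow the same window straight from the containerd unit journal (sketch, assuming journal access over minikube ssh):
	  $ out/minikube-linux-amd64 ssh -p no-preload-20220325020326-262786 -- sudo journalctl -u containerd --since "2022-03-25 02:09:00" --until "2022-03-25 02:16:21"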
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220325020326-262786
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220325020326-262786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95
	                    minikube.k8s.io/name=no-preload-20220325020326-262786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_25T02_04_04_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Mar 2022 02:04:01 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220325020326-262786
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 25 Mar 2022 02:16:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Mar 2022 02:14:52 +0000   Fri, 25 Mar 2022 02:03:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Mar 2022 02:14:52 +0000   Fri, 25 Mar 2022 02:03:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Mar 2022 02:14:52 +0000   Fri, 25 Mar 2022 02:03:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 25 Mar 2022 02:14:52 +0000   Fri, 25 Mar 2022 02:03:58 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20220325020326-262786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                38254055-e8ea-4285-a000-185429061264
	  Boot ID:                    63fce5d9-a30b-498a-bfed-7dd46d23a363
	  Kernel Version:             5.13.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.4-rc.0
	  Kube-Proxy Version:         v1.23.4-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220325020326-262786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-nhlsm                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-no-preload-20220325020326-262786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-no-preload-20220325020326-262786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-l6tg2                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-no-preload-20220325020326-262786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 12m   kube-proxy  
	  Normal  Starting                 12m   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m   kubelet     Node no-preload-20220325020326-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet     Node no-preload-20220325020326-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet     Node no-preload-20220325020326-262786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m   kubelet     Updated Node Allocatable limit across pods
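	Everything in the node description is healthy except the Ready condition: the node is NotReady only because no CNI config ever landed, and the resulting not-ready taints are what block scheduling later in this test. To extract just that condition (context name taken from this run):
	  $ kubectl --context no-preload-20220325020326-262786 get node no-preload-20220325020326-262786 -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'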
	
	* 
	* ==> dmesg <==
	* [  +1.011896] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.023877] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +0.953086] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethf4b51852
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 70 36 f8 5f b0 08 06
	[  +0.031950] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth6535462d
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a c6 0e 0e 23 49 08 06
	[  +0.644934] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth6535462d
	[  +0.401878] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth5b52bbbf
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 37 c2 ed 50 67 08 06
	[  +0.935995] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.035860] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.019942] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[Mar25 02:14] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth3401b1e0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 76 e9 ba cf fb f8 08 06
	[  +0.179199] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vetha9eb2fdf
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 32 f8 c1 5c 31 f0 08 06
	[  +0.564272] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethc1de7e82
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e d1 71 83 67 99 08 06
	[  +0.295714] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth71c4bd69
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe 35 ee 14 12 82 08 06
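	The martian-source lines are the kernel flagging packets whose source address is implausible for the interface they arrived on; with the veth/bridge wiring used here they are routine noise rather than a fault. Whether they are logged at all is governed by a sysctl (sketch to inspect it on the node):
	  $ out/minikube-linux-amd64 ssh -p no-preload-20220325020326-262786 -- sysctl net.ipv4.conf.all.log_martians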
	
	* 
	* ==> etcd [e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710] <==
	* {"level":"info","ts":"2022-03-25T02:03:59.112Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:03:59.113Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-25T02:03:59.113Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-25T02:03:59.114Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-03-25T02:03:59.114Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2022-03-25T02:04:05.663Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"242.239014ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:242"}
	{"level":"info","ts":"2022-03-25T02:04:05.663Z","caller":"traceutil/trace.go:171","msg":"trace[1467328274] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:315; }","duration":"242.361798ms","start":"2022-03-25T02:04:05.421Z","end":"2022-03-25T02:04:05.663Z","steps":["trace[1467328274] 'agreement among raft nodes before linearized reading'  (duration: 56.930919ms)","trace[1467328274] 'range keys from in-memory index tree'  (duration: 185.270726ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-25T02:04:05.664Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"185.300628ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289939393067032833 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" mod_revision:312 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" value_size:186 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-03-25T02:04:05.664Z","caller":"traceutil/trace.go:171","msg":"trace[2014320345] transaction","detail":"{read_only:false; response_revision:316; number_of_response:1; }","duration":"241.496188ms","start":"2022-03-25T02:04:05.422Z","end":"2022-03-25T02:04:05.664Z","steps":["trace[2014320345] 'process raft request'  (duration: 55.599935ms)","trace[2014320345] 'compare'  (duration: 185.185272ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-25T02:04:20.449Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"119.997456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-20220325020326-262786\" ","response":"range_response_count:1 size:3937"}
	{"level":"info","ts":"2022-03-25T02:04:20.450Z","caller":"traceutil/trace.go:171","msg":"trace[497694994] range","detail":"{range_begin:/registry/minions/no-preload-20220325020326-262786; range_end:; response_count:1; response_revision:464; }","duration":"120.095305ms","start":"2022-03-25T02:04:20.329Z","end":"2022-03-25T02:04:20.449Z","steps":["trace[497694994] 'range keys from in-memory index tree'  (duration: 119.847306ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-25T02:10:02.826Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.440384ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289939393067034984 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.67.2\" mod_revision:562 > success:<request_put:<key:\"/registry/masterleases/192.168.67.2\" value_size:67 lease:2289939393067034982 >> failure:<request_range:<key:\"/registry/masterleases/192.168.67.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-03-25T02:10:02.826Z","caller":"traceutil/trace.go:171","msg":"trace[82602266] transaction","detail":"{read_only:false; response_revision:570; number_of_response:1; }","duration":"178.10231ms","start":"2022-03-25T02:10:02.648Z","end":"2022-03-25T02:10:02.826Z","steps":["trace[82602266] 'process raft request'  (duration: 67.445482ms)","trace[82602266] 'compare'  (duration: 110.315281ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-25T02:10:05.921Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"123.404171ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-25T02:10:05.921Z","caller":"traceutil/trace.go:171","msg":"trace[1873070151] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:571; }","duration":"123.490692ms","start":"2022-03-25T02:10:05.798Z","end":"2022-03-25T02:10:05.921Z","steps":["trace[1873070151] 'range keys from in-memory index tree'  (duration: 123.300004ms)"],"step_count":1}
	{"level":"info","ts":"2022-03-25T02:13:59.128Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":544}
	{"level":"info","ts":"2022-03-25T02:13:59.132Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":544,"took":"699.416µs"}
	{"level":"warn","ts":"2022-03-25T02:15:01.600Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"170.888803ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/system-nodes\" ","response":"range_response_count:1 size:1081"}
	{"level":"info","ts":"2022-03-25T02:15:01.601Z","caller":"traceutil/trace.go:171","msg":"trace[905906562] range","detail":"{range_begin:/registry/flowschemas/system-nodes; range_end:; response_count:1; response_revision:657; }","duration":"171.003118ms","start":"2022-03-25T02:15:01.430Z","end":"2022-03-25T02:15:01.601Z","steps":["trace[905906562] 'range keys from in-memory index tree'  (duration: 170.794138ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-25T02:15:01.600Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"237.539209ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:353"}
	{"level":"info","ts":"2022-03-25T02:15:01.601Z","caller":"traceutil/trace.go:171","msg":"trace[1968073290] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:657; }","duration":"237.732688ms","start":"2022-03-25T02:15:01.363Z","end":"2022-03-25T02:15:01.601Z","steps":["trace[1968073290] 'range keys from in-memory index tree'  (duration: 237.419952ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-25T02:15:02.954Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"157.020747ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-25T02:15:02.954Z","caller":"traceutil/trace.go:171","msg":"trace[1453806652] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:658; }","duration":"157.155714ms","start":"2022-03-25T02:15:02.797Z","end":"2022-03-25T02:15:02.954Z","steps":["trace[1453806652] 'range keys from in-memory index tree'  (duration: 156.907337ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-25T02:15:02.954Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"276.058446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2022-03-25T02:15:02.954Z","caller":"traceutil/trace.go:171","msg":"trace[92403723] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:658; }","duration":"276.301215ms","start":"2022-03-25T02:15:02.678Z","end":"2022-03-25T02:15:02.954Z","steps":["trace[92403723] 'agreement among raft nodes before linearized reading'  (duration: 93.691043ms)","trace[92403723] 'range keys from in-memory index tree'  (duration: 182.32369ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  02:16:22 up  4:54,  0 users,  load average: 0.85, 1.01, 1.35
	Linux no-preload-20220325020326-262786 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252] <==
	* I0325 02:04:01.184668       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0325 02:04:01.184751       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0325 02:04:01.194414       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0325 02:04:01.205012       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0325 02:04:01.211748       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0325 02:04:01.215859       1 controller.go:611] quota admission added evaluator for: namespaces
	I0325 02:04:02.083908       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0325 02:04:02.083942       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0325 02:04:02.088759       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0325 02:04:02.092713       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0325 02:04:02.092732       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0325 02:04:02.491818       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0325 02:04:02.527326       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0325 02:04:02.631377       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0325 02:04:02.635980       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0325 02:04:02.637060       1 controller.go:611] quota admission added evaluator for: endpoints
	I0325 02:04:02.641368       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0325 02:04:03.224584       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0325 02:04:03.903586       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0325 02:04:03.912681       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0325 02:04:03.924232       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0325 02:04:09.097471       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0325 02:04:16.674024       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0325 02:04:17.226167       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0325 02:04:18.108269       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
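	The apiserver log is clean: the "quota admission added evaluator" lines are routine registrations during startup. Should a health probe be needed, the readiness endpoint can be queried directly (sketch):
	  $ kubectl --context no-preload-20220325020326-262786 get --raw='/readyz?verbose'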
	
	* 
	* ==> kube-controller-manager [b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244] <==
	* I0325 02:04:16.294792       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0325 02:04:16.306176       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0325 02:04:16.309387       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0325 02:04:16.319146       1 shared_informer.go:247] Caches are synced for PV protection 
	I0325 02:04:16.321347       1 shared_informer.go:247] Caches are synced for cronjob 
	I0325 02:04:16.321379       1 shared_informer.go:247] Caches are synced for expand 
	I0325 02:04:16.323537       1 shared_informer.go:247] Caches are synced for GC 
	I0325 02:04:16.323595       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0325 02:04:16.324698       1 shared_informer.go:247] Caches are synced for endpoint 
	I0325 02:04:16.326019       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0325 02:04:16.327121       1 shared_informer.go:247] Caches are synced for deployment 
	I0325 02:04:16.335393       1 shared_informer.go:247] Caches are synced for stateful set 
	I0325 02:04:16.427013       1 shared_informer.go:247] Caches are synced for resource quota 
	I0325 02:04:16.487473       1 shared_informer.go:247] Caches are synced for resource quota 
	I0325 02:04:16.525760       1 shared_informer.go:247] Caches are synced for attach detach 
	I0325 02:04:16.681203       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-l6tg2"
	I0325 02:04:16.682918       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-nhlsm"
	I0325 02:04:16.948974       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0325 02:04:17.002745       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0325 02:04:17.002770       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0325 02:04:17.228152       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0325 02:04:17.243060       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0325 02:04:17.326473       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-4tdtv"
	I0325 02:04:17.331846       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-b9827"
	I0325 02:04:17.391815       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-4tdtv"
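	The coredns scale-up to 2 followed by the scale-down to 1 is expected: kubeadm deploys two replicas and minikube immediately scales the Deployment to one, which is why coredns-64897985d-4tdtv is created and deleted within the same second. Verification sketch:
	  $ kubectl --context no-preload-20220325020326-262786 -n kube-system get deployment coredns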
	
	* 
	* ==> kube-proxy [0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc] <==
	* I0325 02:04:17.997460       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0325 02:04:17.997538       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0325 02:04:17.997636       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0325 02:04:18.101927       1 server_others.go:206] "Using iptables Proxier"
	I0325 02:04:18.101964       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0325 02:04:18.101979       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0325 02:04:18.102006       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0325 02:04:18.102515       1 server.go:656] "Version info" version="v1.23.4-rc.0"
	I0325 02:04:18.103198       1 config.go:226] "Starting endpoint slice config controller"
	I0325 02:04:18.103254       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0325 02:04:18.103468       1 config.go:317] "Starting service config controller"
	I0325 02:04:18.103487       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0325 02:04:18.204147       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0325 02:04:18.204238       1 shared_informer.go:247] Caches are synced for service config 
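	kube-proxy fell back to iptables mode because no proxy mode was configured, and the "no IPv6 cluster CIDR defined" line is harmless on an IPv4-only cluster. The effective settings live in the kube-proxy ConfigMap (kubeadm convention; sketch):
	  $ kubectl --context no-preload-20220325020326-262786 -n kube-system get configmap kube-proxy -o yaml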
	
	* 
	* ==> kube-scheduler [fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b] <==
	* W0325 02:04:01.205004       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0325 02:04:01.205041       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0325 02:04:01.205131       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:04:01.205164       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:04:01.205173       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0325 02:04:01.205081       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 02:04:01.205195       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0325 02:04:01.207071       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 02:04:01.207301       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0325 02:04:02.013087       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 02:04:02.013124       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0325 02:04:02.073909       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 02:04:02.073940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0325 02:04:02.127514       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:04:02.127555       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0325 02:04:02.131604       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0325 02:04:02.131636       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0325 02:04:02.135598       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 02:04:02.135634       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0325 02:04:02.165918       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 02:04:02.165954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0325 02:04:02.188481       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0325 02:04:02.188518       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0325 02:04:02.598545       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0325 02:04:02.741384       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
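	The burst of "forbidden" list/watch errors is the usual startup race: the scheduler's informers begin before its RBAC bindings are applied, and the errors stop once the client-ca informer syncs at 02:04:02. To confirm the permissions settled (impersonation via the standard --as flag):
	  $ kubectl --context no-preload-20220325020326-262786 auth can-i list pods --as=system:kube-scheduler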
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-03-25 02:03:29 UTC, end at Fri 2022-03-25 02:16:22 UTC. --
	Mar 25 02:15:04 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:04.460340    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:09 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:09.461517    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:14 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:14.462551    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:19 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:19.464022    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:24 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:24.465929    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:29 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:29.466577    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:34 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:34.468339    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:39 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:39.470035    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:44 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:44.471361    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:46 no-preload-20220325020326-262786 kubelet[1827]: I0325 02:15:46.358516    1827 scope.go:110] "RemoveContainer" containerID="5689d84e1ccc0d805363d2c87e55a31681b9b4f56029bd56cf6bff5839bf21ae"
	Mar 25 02:15:46 no-preload-20220325020326-262786 kubelet[1827]: I0325 02:15:46.358882    1827 scope.go:110] "RemoveContainer" containerID="e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741"
	Mar 25 02:15:46 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:46.359275    1827 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-nhlsm_kube-system(57939cf7-016c-486a-8a08-466ff1515c1f)\"" pod="kube-system/kindnet-nhlsm" podUID=57939cf7-016c-486a-8a08-466ff1515c1f
	Mar 25 02:15:49 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:49.472983    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:54 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:54.474149    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:58 no-preload-20220325020326-262786 kubelet[1827]: I0325 02:15:58.107866    1827 scope.go:110] "RemoveContainer" containerID="e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741"
	Mar 25 02:15:58 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:58.108157    1827 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-nhlsm_kube-system(57939cf7-016c-486a-8a08-466ff1515c1f)\"" pod="kube-system/kindnet-nhlsm" podUID=57939cf7-016c-486a-8a08-466ff1515c1f
	Mar 25 02:15:59 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:59.475484    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:16:04 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:16:04.476869    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:16:09 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:16:09.477805    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:16:10 no-preload-20220325020326-262786 kubelet[1827]: I0325 02:16:10.108180    1827 scope.go:110] "RemoveContainer" containerID="e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741"
	Mar 25 02:16:10 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:16:10.108519    1827 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-nhlsm_kube-system(57939cf7-016c-486a-8a08-466ff1515c1f)\"" pod="kube-system/kindnet-nhlsm" podUID=57939cf7-016c-486a-8a08-466ff1515c1f
	Mar 25 02:16:14 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:16:14.478798    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:16:19 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:16:19.480024    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:16:21 no-preload-20220325020326-262786 kubelet[1827]: I0325 02:16:21.108317    1827 scope.go:110] "RemoveContainer" containerID="e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741"
	Mar 25 02:16:21 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:16:21.108647    1827 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-nhlsm_kube-system(57939cf7-016c-486a-8a08-466ff1515c1f)\"" pod="kube-system/kindnet-nhlsm" podUID=57939cf7-016c-486a-8a08-466ff1515c1f
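	kubelet re-reports "cni plugin not initialized" every ~5 seconds because the crash-looping kindnet pod never stays up long enough to write a CNI config, and the 40s CrashLoopBackOff window matches the RemoveContainer cadence in the containerd log above. Sketch to check whether any CNI config was ever written (default path; the conf dir may be overridden on this cluster):
	  $ out/minikube-linux-amd64 ssh -p no-preload-20220325020326-262786 -- sudo ls -l /etc/cni/net.d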
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220325020326-262786 -n no-preload-20220325020326-262786
helpers_test.go:262: (dbg) Run:  kubectl --context no-preload-20220325020326-262786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: busybox coredns-64897985d-b9827 storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context no-preload-20220325020326-262786 describe pod busybox coredns-64897985d-b9827 storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20220325020326-262786 describe pod busybox coredns-64897985d-b9827 storage-provisioner: exit status 1 (60.617918ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qzqv (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-4qzqv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  48s (x8 over 8m2s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-b9827" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context no-preload-20220325020326-262786 describe pod busybox coredns-64897985d-b9827 storage-provisioner: exit status 1
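Note on the failure above: busybox is Pending purely because of the node's not-ready taint, while the two NotFound errors most likely stem from the describe running against the default namespace even though coredns-64897985d-b9827 and storage-provisioner live in kube-system; that mismatch is what produces exit status 1. A namespaced retry would be (sketch):
  kubectl --context no-preload-20220325020326-262786 -n kube-system describe pod coredns-64897985d-b9827 storage-provisioner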
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20220325020326-262786
helpers_test.go:236: (dbg) docker inspect no-preload-20220325020326-262786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778",
	        "Created": "2022-03-25T02:03:28.535684956Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 480559,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-25T02:03:28.905976635Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/hostname",
	        "HostsPath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/hosts",
	        "LogPath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778-json.log",
	        "Name": "/no-preload-20220325020326-262786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-20220325020326-262786:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220325020326-262786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6-init/diff:/var/lib/docker/overlay2/be54eb94fcdf63a43430c63a2deca34069b6322a2c5c80bf32c41c253b4eef44/diff:/var/lib/docker/overlay2/21ae1babc9289fff260c3571871aeb833b37e21656a9cc3eb8df07eb3fe4f79a/diff:/var/lib/docker/overlay2/3ee3331c2ddb88569b411d0ab54a2ef7d7d01ca16c80ced0833580bfbe9bdada/diff:/var/lib/docker/overlay2/d8bc8d60c9bd47ff1095b644ef6d44396d637a148bcebd5ea3b7706fee6b13fe/diff:/var/lib/docker/overlay2/ca1519d93c9c70a99f709b179bab33e31837f4b561c407c362770656a0ad970a/diff:/var/lib/docker/overlay2/8b7ed626d5c01c442f80e5e9bbe87bdaa4e3b209e4d0720010e78ab32631a44a/diff:/var/lib/docker/overlay2/fb54723378f675b6bc72cd8608807007fdf0fc435e1383398764588c2881dcc7/diff:/var/lib/docker/overlay2/20deb1df880f3adcdce0caa0e0b6ce0170bb01f7b7c564aa7c00c10e886a8422/diff:/var/lib/docker/overlay2/3e9c58516a6ca7eb07cbd77ece10826bcffc2c564c20a046413c894f1e457c14/diff:/var/lib/docker/overlay2/9fb4a5
72727350e63058db77497edb3aa8f3fd157bf3faa4b882f3d6218a2d2c/diff:/var/lib/docker/overlay2/2dad70b776042365cd2686f6925d1728b98e82e82f5ec21fcafaa6ce796653ed/diff:/var/lib/docker/overlay2/d94272e0e249e656b05e1483e035d137254d3bab6b9c568065d1f8783a72cf04/diff:/var/lib/docker/overlay2/c92254120acded698585ef577c9ac3d6f73267981cf36a87ee38ccd694f47b94/diff:/var/lib/docker/overlay2/84b4bbb670c367ba779baabe503b5345e2c3e2beb5a4505c3b235e5db4e89ee6/diff:/var/lib/docker/overlay2/4981a02b24aef7d5c066a42837381dcdd4a299b491d8e55523fca674cd0db0d1/diff:/var/lib/docker/overlay2/c3c34e9c466bb3a144a51042f0930825943916afe285a7f97644c400518f341f/diff:/var/lib/docker/overlay2/44f4921d100d6ba90db390588513726503aec84844325bd99eeb137c6018277f/diff:/var/lib/docker/overlay2/a39a458488b7f863079e4c6b58196e8a4f9082987519a734c45a007cd0d94828/diff:/var/lib/docker/overlay2/f0312047c7b0b02fd66fd826e23406e40cee0ca3ceecfd3ead5dcecbc5026230/diff:/var/lib/docker/overlay2/96c9397a20500e41888794ddb5877995a1734042648a24d59ca0d2ec5021e9de/diff:/var/lib/d
ocker/overlay2/8dea2fad08fc7127380e7b5ee48074c49d9bb8abb4e0e626d1753b47e734e16a/diff:/var/lib/docker/overlay2/b45ce3d74626e250be956220b3bdd19784c7b5f160566cf2abc4e3bebec2e787/diff:/var/lib/docker/overlay2/c53d5b53646725c2e75d104fbdf63f67b1a6d4ec7be410f678c39db7ca88704f/diff:/var/lib/docker/overlay2/c989625713fed7c79c6acf122f86cb4a5d36c5c25f16b6ff042aba0f5c76ef40/diff:/var/lib/docker/overlay2/062c90de70f705242f19d7fd008480be165d852e536336d97bcfe7aaba03bc2c/diff:/var/lib/docker/overlay2/9dd5e3e1997449a8dd0820e30ab1aa5b34db265e9783f9431ebcca7ceaf17510/diff:/var/lib/docker/overlay2/4cb50a0a67380109d348cd3005e5b855fceaf243cf5b0130df8952ed58e6c56c/diff:/var/lib/docker/overlay2/1fce572a3789e30bd91fd684a3bc2cae58743b3718b1d078378158f22156795e/diff:/var/lib/docker/overlay2/2bb28738c8f2de75a3da83169e8b29e28c57bf73908fe80dbca06551ac39d459/diff:/var/lib/docker/overlay2/70c1f9c120af3acda7bbd97c063aeed205a47f16b2818b7a2c4e5cfa2e3321bf/diff:/var/lib/docker/overlay2/84cfc718f71abd3da77845f467dceaeceb62953d1f92e9cb2d966b19d2e
9a733/diff:/var/lib/docker/overlay2/8d6f862f75e903072cefca0f974c925dc5946ac5bf7bcb923adecf23cdb3d454/diff:/var/lib/docker/overlay2/778af97f4ec3a1e9ceed247958939b375c3209058ee649ac0231b3ccf59c0e5d/diff:/var/lib/docker/overlay2/c0e0a5b57f41ef9ddf67d67f928bcbbd060abb8aa3ec732c9ee48b3d5ce723a2/diff:/var/lib/docker/overlay2/f4bc2ed173f4985e492d89df0a08aa6017952a9ac37054537d57bb7589c1560e/diff:/var/lib/docker/overlay2/562d496753ef0c1e8279787dfdb7cb4d6e8cfbd0eaf79a1f9dc3fd10916330b5/diff:/var/lib/docker/overlay2/717fb77b4f16514e3bd496845adfe213bd63609053b43f6d800e6757197f0f04/diff:/var/lib/docker/overlay2/4e8d84337665652419a5a40f908d308087c202f55b785114c1e38be84a17eca7/diff:/var/lib/docker/overlay2/5b34f3b4b29c9f9ab991b524096584bbf01d14e9d8d4b7786bda6d28241999e8/diff:/var/lib/docker/overlay2/49e6c28c6a50420d2f8e2a3d3278d425495086d9478a7ece39dd989925949a5d/diff:/var/lib/docker/overlay2/86c1534e0117ca4e106fa3c177c4f1b2d85e37b9d2a5dceeb007afff1721713e/diff:/var/lib/docker/overlay2/c5013a5641f131cadca99884c2ae5b785bfae4
a079463490ea0cd215cd884d43/diff:/var/lib/docker/overlay2/f61ccdb261987275521111370c06a14baf102e5047e24281f278eaaee820a410/diff:/var/lib/docker/overlay2/46838e2b0c3f67b4bfda29963d76e2c8babbd54904a4a6f5745e924a73437c2d/diff:/var/lib/docker/overlay2/16180439a4d3ee12ff794b26cbfd692186d7785b4c6f33c8c57416535667c54e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220325020326-262786",
	                "Source": "/var/lib/docker/volumes/no-preload-20220325020326-262786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220325020326-262786",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220325020326-262786",
	                "name.minikube.sigs.k8s.io": "no-preload-20220325020326-262786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7f23607c3a5c08b0783c59d71ece486f3f43c024250c62ab15201265e18ba268",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49554"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49553"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49550"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49552"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49551"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7f23607c3a5c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220325020326-262786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6f52c20ff4ed",
	                        "no-preload-20220325020326-262786"
	                    ],
	                    "NetworkID": "6fbac9304f70e9e85060797caa05d374912c7ea43808a752012c2c1abc994540",
	                    "EndpointID": "e5245d3bbce08b43baa512fa9f1a16faf8d4935ea476d70841cfec48e04346df",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
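The inspect output above is the raw material for the port lookups that appear later in this log (the --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' invocations). For reference, the same lookup can be done by decoding the JSON directly; the following is a minimal, illustrative Go sketch, not minikube's actual helper. The container name and port are taken from the output above; run against this profile it would print 49554, the host port bound to 22/tcp.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // hostPort shells out to `docker container inspect` and pulls the host
    // port mapped to a given container port (e.g. "22/tcp") out of the JSON.
    func hostPort(container, port string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", container).Output()
    	if err != nil {
    		return "", err
    	}
    	// docker inspect prints a JSON array with one object per container.
    	var info []struct {
    		NetworkSettings struct {
    			Ports map[string][]struct{ HostIp, HostPort string }
    		}
    	}
    	if err := json.Unmarshal(out, &info); err != nil {
    		return "", err
    	}
    	if len(info) == 0 {
    		return "", fmt.Errorf("no such container: %s", container)
    	}
    	bindings := info[0].NetworkSettings.Ports[port]
    	if len(bindings) == 0 {
    		return "", fmt.Errorf("no host binding for %s", port)
    	}
    	return bindings[0].HostPort, nil
    }

    func main() {
    	p, err := hostPort("no-preload-20220325020326-262786", "22/tcp")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(p) // "49554" per the inspect output above
    }
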
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220325020326-262786 -n no-preload-20220325020326-262786
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220325020326-262786 logs -n 25
helpers_test.go:253: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:08:18 UTC | Fri, 25 Mar 2022 02:08:19 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:07:43 UTC | Fri, 25 Mar 2022 02:08:42 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.3                               |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:08:51 UTC | Fri, 25 Mar 2022 02:08:52 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:08:52 UTC | Fri, 25 Mar 2022 02:09:12 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:12 UTC | Fri, 25 Mar 2022 02:09:12 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786                      | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:29 UTC | Fri, 25 Mar 2022 02:09:30 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786                      | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:31 UTC | Fri, 25 Mar 2022 02:09:32 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:32 UTC | Fri, 25 Mar 2022 02:09:33 UTC |
	|         | old-k8s-version-20220325015306-262786                      |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:33 UTC | Fri, 25 Mar 2022 02:09:39 UTC |
	|         | old-k8s-version-20220325015306-262786                      |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:39 UTC | Fri, 25 Mar 2022 02:09:39 UTC |
	|         | old-k8s-version-20220325015306-262786                      |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | bridge-20220325014920-262786                               | bridge-20220325014920-262786                     | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:52 UTC | Fri, 25 Mar 2022 02:09:53 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | bridge-20220325014920-262786                     | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:53 UTC | Fri, 25 Mar 2022 02:09:56 UTC |
	|         | bridge-20220325014920-262786                               |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20220325020956-262786      | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:56 UTC | Fri, 25 Mar 2022 02:09:56 UTC |
	|         | disable-driver-mounts-20220325020956-262786                |                                                  |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:12 UTC | Fri, 25 Mar 2022 02:14:36 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.3                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:47 UTC | Fri, 25 Mar 2022 02:14:47 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| pause   | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:47 UTC | Fri, 25 Mar 2022 02:14:48 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| unpause | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:49 UTC | Fri, 25 Mar 2022 02:14:50 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:52 UTC | Fri, 25 Mar 2022 02:14:53 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:51 UTC | Fri, 25 Mar 2022 02:14:54 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:54 UTC | Fri, 25 Mar 2022 02:14:54 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20220325021454-262786 --memory=2200          | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:54 UTC | Fri, 25 Mar 2022 02:15:49 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:49 UTC | Fri, 25 Mar 2022 02:15:50 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:50 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:21 UTC | Fri, 25 Mar 2022 02:16:22 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 02:16:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0325 02:16:10.857208  516439 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:16:10.857322  516439 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:16:10.857331  516439 out.go:310] Setting ErrFile to fd 2...
	I0325 02:16:10.857335  516439 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:16:10.857445  516439 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:16:10.857700  516439 out.go:304] Setting JSON to false
	I0325 02:16:10.859649  516439 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17643,"bootTime":1648156928,"procs":397,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:16:10.859731  516439 start.go:125] virtualization: kvm guest
	I0325 02:16:10.862412  516439 out.go:176] * [newest-cni-20220325021454-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:16:10.864106  516439 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:16:10.862611  516439 notify.go:193] Checking for updates...
	I0325 02:16:10.865924  516439 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:16:10.867399  516439 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:16:10.868947  516439 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:16:10.870479  516439 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:16:10.871011  516439 config.go:176] Loaded profile config "newest-cni-20220325021454-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:16:10.871482  516439 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:16:10.915061  516439 docker.go:136] docker version: linux-20.10.14
	I0325 02:16:10.915258  516439 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:16:11.017970  516439 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:16:10.946192925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
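	Fields in this docker info payload such as MemoryLimit, SwapLimit and CgroupDriver are the kind of signals driver preflight checks (like the cgroup warnings further down) inspect. A minimal, illustrative Go sketch that reads the same fields; it is not minikube's preflight code:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same query the log shows: docker system info --format "{{json .}}"
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	// Decode only the fields of interest from the info JSON.
    	var info struct {
    		MemoryLimit  bool
    		SwapLimit    bool
    		CgroupDriver string
    	}
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	fmt.Printf("memory limit: %v, swap limit: %v, cgroup driver: %s\n",
    		info.MemoryLimit, info.SwapLimit, info.CgroupDriver)
    }
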
	I0325 02:16:11.018120  516439 docker.go:253] overlay module found
	I0325 02:16:11.020944  516439 out.go:176] * Using the docker driver based on existing profile
	I0325 02:16:11.020986  516439 start.go:284] selected driver: docker
	I0325 02:16:11.020995  516439 start.go:801] validating driver "docker" against &{Name:newest-cni-20220325021454-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:newest-cni-20220325021454-262786 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[M
etricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:16:11.021125  516439 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:16:11.021175  516439 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:16:11.021203  516439 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:16:11.023062  516439 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:16:11.023760  516439 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:16:11.127698  516439 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:16:11.057157245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0325 02:16:11.127890  516439 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:16:11.127933  516439 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:16:11.130327  516439 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:16:11.130453  516439 start_flags.go:853] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0325 02:16:11.130478  516439 cni.go:93] Creating CNI manager for ""
	I0325 02:16:11.130487  516439 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:16:11.130497  516439 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0325 02:16:11.130506  516439 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0325 02:16:11.130515  516439 start_flags.go:304] config:
	{Name:newest-cni-20220325021454-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:newest-cni-20220325021454-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false
default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:16:11.132570  516439 out.go:176] * Starting control plane node newest-cni-20220325021454-262786 in cluster newest-cni-20220325021454-262786
	I0325 02:16:11.132606  516439 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:16:11.134281  516439 out.go:176] * Pulling base image ...
	I0325 02:16:11.134320  516439 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime containerd
	I0325 02:16:11.134363  516439 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-rc.0-containerd-overlay2-amd64.tar.lz4
	I0325 02:16:11.134391  516439 cache.go:57] Caching tarball of preloaded images
	I0325 02:16:11.134462  516439 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:16:11.134590  516439 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 02:16:11.134609  516439 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.4-rc.0 on containerd
	I0325 02:16:11.134757  516439 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220325021454-262786/config.json ...
	I0325 02:16:11.171611  516439 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:16:11.171645  516439 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:16:11.171663  516439 cache.go:208] Successfully downloaded all kic artifacts
	I0325 02:16:11.171712  516439 start.go:348] acquiring machines lock for newest-cni-20220325021454-262786: {Name:mk4e896cd01057f7f0460e08ea6f76ea52e9fc11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:11.171839  516439 start.go:352] acquired machines lock for "newest-cni-20220325021454-262786" in 87.92µs
	I0325 02:16:11.171866  516439 start.go:94] Skipping create...Using existing machine configuration
	I0325 02:16:11.171876  516439 fix.go:55] fixHost starting: 
	I0325 02:16:11.172159  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:11.206127  516439 fix.go:108] recreateIfNeeded on newest-cni-20220325021454-262786: state=Stopped err=<nil>
	W0325 02:16:11.206165  516439 fix.go:134] unexpected machine state, will restart: <nil>
	I0325 02:16:10.469506  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:12.968692  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:11.208927  516439 out.go:176] * Restarting existing docker container for "newest-cni-20220325021454-262786" ...
	I0325 02:16:11.209008  516439 cli_runner.go:133] Run: docker start newest-cni-20220325021454-262786
	I0325 02:16:11.589688  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:11.626390  516439 kic.go:420] container "newest-cni-20220325021454-262786" state is running.
	I0325 02:16:11.626791  516439 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220325021454-262786
	I0325 02:16:11.662376  516439 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220325021454-262786/config.json ...
	I0325 02:16:11.662590  516439 machine.go:88] provisioning docker machine ...
	I0325 02:16:11.662617  516439 ubuntu.go:169] provisioning hostname "newest-cni-20220325021454-262786"
	I0325 02:16:11.662679  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:11.698397  516439 main.go:130] libmachine: Using SSH client type: native
	I0325 02:16:11.698632  516439 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49584 <nil> <nil>}
	I0325 02:16:11.698657  516439 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220325021454-262786 && echo "newest-cni-20220325021454-262786" | sudo tee /etc/hostname
	I0325 02:16:11.699319  516439 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47098->127.0.0.1:49584: read: connection reset by peer
	I0325 02:16:14.828030  516439 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20220325021454-262786
	
	I0325 02:16:14.828146  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:14.863689  516439 main.go:130] libmachine: Using SSH client type: native
	I0325 02:16:14.863891  516439 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49584 <nil> <nil>}
	I0325 02:16:14.863922  516439 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220325021454-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220325021454-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220325021454-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 02:16:14.987329  516439 main.go:130] libmachine: SSH cmd err, output: <nil>: 
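	The hostname exchange above is libmachine driving SSH over the mapped port (127.0.0.1:49584) with the profile's id_rsa key. A rough, illustrative Go equivalent using golang.org/x/crypto/ssh follows; the address, user and key path are taken from this log, and the sketch only mirrors the step, it is not libmachine's code:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path as logged (assumes MINIKUBE_HOME points at the .minikube dir).
    	key, err := os.ReadFile(os.ExpandEnv(
    		"$MINIKUBE_HOME/machines/newest-cni-20220325021454-262786/id_rsa"))
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:49584", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	// The same command the provisioner ran above.
    	out, err := sess.CombinedOutput(`sudo hostname newest-cni-20220325021454-262786 && echo "newest-cni-20220325021454-262786" | sudo tee /etc/hostname`)
    	fmt.Printf("%s err=%v\n", out, err)
    }
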
	I0325 02:16:14.987363  516439 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558
/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:16:14.987384  516439 ubuntu.go:177] setting up certificates
	I0325 02:16:14.987394  516439 provision.go:83] configureAuth start
	I0325 02:16:14.987449  516439 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220325021454-262786
	I0325 02:16:15.023486  516439 provision.go:138] copyHostCerts
	I0325 02:16:15.023557  516439 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:16:15.023568  516439 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:16:15.023649  516439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:16:15.023797  516439 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:16:15.023817  516439 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:16:15.023853  516439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:16:15.023988  516439 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:16:15.024001  516439 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:16:15.024036  516439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:16:15.024113  516439 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220325021454-262786 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220325021454-262786]
	I0325 02:16:15.191816  516439 provision.go:172] copyRemoteCerts
	I0325 02:16:15.191886  516439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:16:15.191924  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:15.226579  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:15.315237  516439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:16:15.334992  516439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0325 02:16:15.353220  516439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0325 02:16:15.370801  516439 provision.go:86] duration metric: configureAuth took 383.388958ms
	I0325 02:16:15.370835  516439 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:16:15.371067  516439 config.go:176] Loaded profile config "newest-cni-20220325021454-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:16:15.371086  516439 machine.go:91] provisioned docker machine in 3.708480718s
	I0325 02:16:15.371095  516439 start.go:302] post-start starting for "newest-cni-20220325021454-262786" (driver="docker")
	I0325 02:16:15.371108  516439 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:16:15.371167  516439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:16:15.371200  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:15.405621  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:15.499116  516439 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:16:15.502185  516439 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:16:15.502226  516439 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:16:15.502243  516439 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:16:15.502258  516439 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:16:15.502270  516439 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:16:15.502332  516439 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:16:15.502449  516439 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:16:15.502556  516439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:16:15.509900  516439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:16:15.528356  516439 start.go:305] post-start completed in 157.237952ms
	I0325 02:16:15.528444  516439 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:16:15.528483  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:15.562513  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:15.647560  516439 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:16:15.651753  516439 fix.go:57] fixHost completed within 4.479868063s
	I0325 02:16:15.651790  516439 start.go:81] releasing machines lock for "newest-cni-20220325021454-262786", held for 4.479925727s
	I0325 02:16:15.651894  516439 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220325021454-262786
	I0325 02:16:15.687947  516439 ssh_runner.go:195] Run: systemctl --version
	I0325 02:16:15.688009  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:15.688018  516439 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:16:15.688090  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:15.725441  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:15.725798  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:15.834086  516439 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:16:15.846106  516439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:16:15.855798  516439 docker.go:183] disabling docker service ...
	I0325 02:16:15.855858  516439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:16:14.968784  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:17.468253  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:19.468366  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:15.866222  516439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:16:15.876219  516439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:16:15.953515  516439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:16:16.027611  516439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 02:16:16.037575  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:16:16.052153  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0325 02:16:16.065317  516439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:16:16.072429  516439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:16:16.079342  516439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:16:16.150523  516439 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:16:16.225698  516439 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:16:16.225773  516439 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:16:16.229806  516439 start.go:462] Will wait 60s for crictl version
	I0325 02:16:16.229865  516439 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:16:16.256593  516439 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:16:16Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
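The fatal message above is crictl reaching containerd's socket before the CRI plugin has finished initializing, which is why the harness schedules a retry instead of failing outright. A minimal manual check of CRI readiness, assuming shell access to the node (for example via "minikube ssh" into the profile under test):

    # Ask the CRI plugin for its version, explicitly against the containerd socket.
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
    # If it still reports "server is not initialized yet", inspect containerd's startup log:
    sudo journalctl -u containerd --no-pager | tail -n 50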
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	e938c238f422e       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   6384b2b4644e2
	0f40034eeb6e5       abbcf459c7739       12 minutes ago      Running             kube-proxy                0                   3bfde249ebac0
	ca6eb75c498fb       ce3b8500a91ff       12 minutes ago      Running             kube-apiserver            0                   df4aa21cc08ee
	fad18b6ff5e71       4a82fd4414312       12 minutes ago      Running             kube-scheduler            0                   f2cefe0b290e6
	e6d0357cdf9c2       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   559dbd4a425c8
	b96c3eba0f9ad       9f243260866d4       12 minutes ago      Running             kube-controller-manager   0                   c344b873121f2
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-03-25 02:03:29 UTC, end at Fri 2022-03-25 02:16:23 UTC. --
	Mar 25 02:09:44 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:44.017541947Z" level=warning msg="cleaning up after shim disconnected" id=2dae6f150a56e966157e1812d0b09c8f242a61b354ba0703bb51a0f610359afb namespace=k8s.io
	Mar 25 02:09:44 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:44.017558416Z" level=info msg="cleaning up dead shim"
	Mar 25 02:09:44 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:44.029089294Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:09:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2964\n"
	Mar 25 02:09:44 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:44.752305553Z" level=info msg="RemoveContainer for \"ba9cd9bebb023a0ee09c02501fa27e5b19e2da687fd2241fa98b7268ecd3d0f3\""
	Mar 25 02:09:44 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:44.758385525Z" level=info msg="RemoveContainer for \"ba9cd9bebb023a0ee09c02501fa27e5b19e2da687fd2241fa98b7268ecd3d0f3\" returns successfully"
	Mar 25 02:09:56 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:56.110231270Z" level=info msg="CreateContainer within sandbox \"6384b2b4644e29e3d0f13baba82f8569a1e53ec5f8459364d05edd26163230f1\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Mar 25 02:09:56 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:56.125867036Z" level=info msg="CreateContainer within sandbox \"6384b2b4644e29e3d0f13baba82f8569a1e53ec5f8459364d05edd26163230f1\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"5689d84e1ccc0d805363d2c87e55a31681b9b4f56029bd56cf6bff5839bf21ae\""
	Mar 25 02:09:56 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:56.126375854Z" level=info msg="StartContainer for \"5689d84e1ccc0d805363d2c87e55a31681b9b4f56029bd56cf6bff5839bf21ae\""
	Mar 25 02:09:56 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:09:56.289387660Z" level=info msg="StartContainer for \"5689d84e1ccc0d805363d2c87e55a31681b9b4f56029bd56cf6bff5839bf21ae\" returns successfully"
	Mar 25 02:12:36 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:12:36.531653697Z" level=info msg="shim disconnected" id=5689d84e1ccc0d805363d2c87e55a31681b9b4f56029bd56cf6bff5839bf21ae
	Mar 25 02:12:36 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:12:36.531741643Z" level=warning msg="cleaning up after shim disconnected" id=5689d84e1ccc0d805363d2c87e55a31681b9b4f56029bd56cf6bff5839bf21ae namespace=k8s.io
	Mar 25 02:12:36 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:12:36.531757935Z" level=info msg="cleaning up dead shim"
	Mar 25 02:12:36 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:12:36.542075349Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:12:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3068\n"
	Mar 25 02:12:37 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:12:37.040589792Z" level=info msg="RemoveContainer for \"2dae6f150a56e966157e1812d0b09c8f242a61b354ba0703bb51a0f610359afb\""
	Mar 25 02:12:37 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:12:37.045314816Z" level=info msg="RemoveContainer for \"2dae6f150a56e966157e1812d0b09c8f242a61b354ba0703bb51a0f610359afb\" returns successfully"
	Mar 25 02:13:05 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:13:05.110637008Z" level=info msg="CreateContainer within sandbox \"6384b2b4644e29e3d0f13baba82f8569a1e53ec5f8459364d05edd26163230f1\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Mar 25 02:13:05 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:13:05.124763139Z" level=info msg="CreateContainer within sandbox \"6384b2b4644e29e3d0f13baba82f8569a1e53ec5f8459364d05edd26163230f1\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741\""
	Mar 25 02:13:05 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:13:05.125334830Z" level=info msg="StartContainer for \"e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741\""
	Mar 25 02:13:05 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:13:05.288517762Z" level=info msg="StartContainer for \"e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741\" returns successfully"
	Mar 25 02:15:45 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:15:45.533664227Z" level=info msg="shim disconnected" id=e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741
	Mar 25 02:15:45 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:15:45.533733885Z" level=warning msg="cleaning up after shim disconnected" id=e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741 namespace=k8s.io
	Mar 25 02:15:45 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:15:45.533745995Z" level=info msg="cleaning up dead shim"
	Mar 25 02:15:45 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:15:45.544548942Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:15:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3173\n"
	Mar 25 02:15:46 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:15:46.361290100Z" level=info msg="RemoveContainer for \"5689d84e1ccc0d805363d2c87e55a31681b9b4f56029bd56cf6bff5839bf21ae\""
	Mar 25 02:15:46 no-preload-20220325020326-262786 containerd[472]: time="2022-03-25T02:15:46.366577819Z" level=info msg="RemoveContainer for \"5689d84e1ccc0d805363d2c87e55a31681b9b4f56029bd56cf6bff5839bf21ae\" returns successfully"
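This containerd log shows the crash cycle behind the "Exited ... 3" kindnet-cni row in the container status table above: each attempt starts, its shim disconnects a few minutes later, and the previous container is removed. A sketch of how one could pull the exit details for the dead attempt, assuming crictl is configured as in the crictl.yaml written earlier (the container id below is a placeholder):

    # List kindnet containers, including exited ones, then dump the most recent one's logs.
    sudo crictl ps -a --name kindnet
    sudo crictl logs <container-id>    # substitute an id from the listing above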
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220325020326-262786
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220325020326-262786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95
	                    minikube.k8s.io/name=no-preload-20220325020326-262786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_25T02_04_04_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Mar 2022 02:04:01 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220325020326-262786
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 25 Mar 2022 02:16:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Mar 2022 02:14:52 +0000   Fri, 25 Mar 2022 02:03:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Mar 2022 02:14:52 +0000   Fri, 25 Mar 2022 02:03:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Mar 2022 02:14:52 +0000   Fri, 25 Mar 2022 02:03:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 25 Mar 2022 02:14:52 +0000   Fri, 25 Mar 2022 02:03:58 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20220325020326-262786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                38254055-e8ea-4285-a000-185429061264
	  Boot ID:                    63fce5d9-a30b-498a-bfed-7dd46d23a363
	  Kernel Version:             5.13.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.4-rc.0
	  Kube-Proxy Version:         v1.23.4-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220325020326-262786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-nhlsm                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-no-preload-20220325020326-262786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-no-preload-20220325020326-262786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-l6tg2                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-no-preload-20220325020326-262786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 12m   kube-proxy  
	  Normal  Starting                 12m   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m   kubelet     Node no-preload-20220325020326-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet     Node no-preload-20220325020326-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet     Node no-preload-20220325020326-262786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m   kubelet     Updated Node Allocatable limit across pods
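Everything in this node report points at one cause: Ready is False only because the CNI plugin never initialized, and the resulting not-ready taints are what later leave the test's busybox pod unschedulable. A quick way to confirm both facts from the test's kubeconfig side (context name as used throughout this run):

    # Print the node's taints and the message on its Ready condition.
    kubectl --context no-preload-20220325020326-262786 get node no-preload-20220325020326-262786 \
      -o jsonpath='{.spec.taints}{"\n"}{.status.conditions[?(@.type=="Ready")].message}{"\n"}'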
	
	* 
	* ==> dmesg <==
	* [  +1.011896] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.023877] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +0.953086] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethf4b51852
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 70 36 f8 5f b0 08 06
	[  +0.031950] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth6535462d
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a c6 0e 0e 23 49 08 06
	[  +0.644934] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth6535462d
	[  +0.401878] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth5b52bbbf
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 37 c2 ed 50 67 08 06
	[  +0.935995] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.035860] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.019942] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[Mar25 02:14] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth3401b1e0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 76 e9 ba cf fb f8 08 06
	[  +0.179199] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vetha9eb2fdf
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 32 f8 c1 5c 31 f0 08 06
	[  +0.564272] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethc1de7e82
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e d1 71 83 67 99 08 06
	[  +0.295714] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth71c4bd69
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe 35 ee 14 12 82 08 06
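The repeated "martian source" entries are the kernel logging packets whose source addresses fail reverse-path checks on the bridge and veth devices; in this nested-container setup they are noise rather than a fault. Whether martian logging is enabled is controlled by a sysctl, which can be inspected (or switched off) like so:

    sysctl net.ipv4.conf.all.log_martians
    # sudo sysctl -w net.ipv4.conf.all.log_martians=0    # optional: silence the log spam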
	
	* 
	* ==> etcd [e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710] <==
	* {"level":"info","ts":"2022-03-25T02:03:59.112Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:03:59.113Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-25T02:03:59.113Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-25T02:03:59.114Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-03-25T02:03:59.114Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2022-03-25T02:04:05.663Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"242.239014ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:242"}
	{"level":"info","ts":"2022-03-25T02:04:05.663Z","caller":"traceutil/trace.go:171","msg":"trace[1467328274] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:315; }","duration":"242.361798ms","start":"2022-03-25T02:04:05.421Z","end":"2022-03-25T02:04:05.663Z","steps":["trace[1467328274] 'agreement among raft nodes before linearized reading'  (duration: 56.930919ms)","trace[1467328274] 'range keys from in-memory index tree'  (duration: 185.270726ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-25T02:04:05.664Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"185.300628ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289939393067032833 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" mod_revision:312 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" value_size:186 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-03-25T02:04:05.664Z","caller":"traceutil/trace.go:171","msg":"trace[2014320345] transaction","detail":"{read_only:false; response_revision:316; number_of_response:1; }","duration":"241.496188ms","start":"2022-03-25T02:04:05.422Z","end":"2022-03-25T02:04:05.664Z","steps":["trace[2014320345] 'process raft request'  (duration: 55.599935ms)","trace[2014320345] 'compare'  (duration: 185.185272ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-25T02:04:20.449Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"119.997456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-20220325020326-262786\" ","response":"range_response_count:1 size:3937"}
	{"level":"info","ts":"2022-03-25T02:04:20.450Z","caller":"traceutil/trace.go:171","msg":"trace[497694994] range","detail":"{range_begin:/registry/minions/no-preload-20220325020326-262786; range_end:; response_count:1; response_revision:464; }","duration":"120.095305ms","start":"2022-03-25T02:04:20.329Z","end":"2022-03-25T02:04:20.449Z","steps":["trace[497694994] 'range keys from in-memory index tree'  (duration: 119.847306ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-25T02:10:02.826Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.440384ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289939393067034984 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.67.2\" mod_revision:562 > success:<request_put:<key:\"/registry/masterleases/192.168.67.2\" value_size:67 lease:2289939393067034982 >> failure:<request_range:<key:\"/registry/masterleases/192.168.67.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-03-25T02:10:02.826Z","caller":"traceutil/trace.go:171","msg":"trace[82602266] transaction","detail":"{read_only:false; response_revision:570; number_of_response:1; }","duration":"178.10231ms","start":"2022-03-25T02:10:02.648Z","end":"2022-03-25T02:10:02.826Z","steps":["trace[82602266] 'process raft request'  (duration: 67.445482ms)","trace[82602266] 'compare'  (duration: 110.315281ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-25T02:10:05.921Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"123.404171ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-25T02:10:05.921Z","caller":"traceutil/trace.go:171","msg":"trace[1873070151] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:571; }","duration":"123.490692ms","start":"2022-03-25T02:10:05.798Z","end":"2022-03-25T02:10:05.921Z","steps":["trace[1873070151] 'range keys from in-memory index tree'  (duration: 123.300004ms)"],"step_count":1}
	{"level":"info","ts":"2022-03-25T02:13:59.128Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":544}
	{"level":"info","ts":"2022-03-25T02:13:59.132Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":544,"took":"699.416µs"}
	{"level":"warn","ts":"2022-03-25T02:15:01.600Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"170.888803ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/system-nodes\" ","response":"range_response_count:1 size:1081"}
	{"level":"info","ts":"2022-03-25T02:15:01.601Z","caller":"traceutil/trace.go:171","msg":"trace[905906562] range","detail":"{range_begin:/registry/flowschemas/system-nodes; range_end:; response_count:1; response_revision:657; }","duration":"171.003118ms","start":"2022-03-25T02:15:01.430Z","end":"2022-03-25T02:15:01.601Z","steps":["trace[905906562] 'range keys from in-memory index tree'  (duration: 170.794138ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-25T02:15:01.600Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"237.539209ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:353"}
	{"level":"info","ts":"2022-03-25T02:15:01.601Z","caller":"traceutil/trace.go:171","msg":"trace[1968073290] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:657; }","duration":"237.732688ms","start":"2022-03-25T02:15:01.363Z","end":"2022-03-25T02:15:01.601Z","steps":["trace[1968073290] 'range keys from in-memory index tree'  (duration: 237.419952ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-25T02:15:02.954Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"157.020747ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-25T02:15:02.954Z","caller":"traceutil/trace.go:171","msg":"trace[1453806652] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:658; }","duration":"157.155714ms","start":"2022-03-25T02:15:02.797Z","end":"2022-03-25T02:15:02.954Z","steps":["trace[1453806652] 'range keys from in-memory index tree'  (duration: 156.907337ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-25T02:15:02.954Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"276.058446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2022-03-25T02:15:02.954Z","caller":"traceutil/trace.go:171","msg":"trace[92403723] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:658; }","duration":"276.301215ms","start":"2022-03-25T02:15:02.678Z","end":"2022-03-25T02:15:02.954Z","steps":["trace[92403723] 'agreement among raft nodes before linearized reading'  (duration: 93.691043ms)","trace[92403723] 'range keys from in-memory index tree'  (duration: 182.32369ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  02:16:23 up  4:54,  0 users,  load average: 1.02, 1.04, 1.36
	Linux no-preload-20220325020326-262786 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252] <==
	* I0325 02:04:01.184668       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0325 02:04:01.184751       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0325 02:04:01.194414       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0325 02:04:01.205012       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0325 02:04:01.211748       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0325 02:04:01.215859       1 controller.go:611] quota admission added evaluator for: namespaces
	I0325 02:04:02.083908       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0325 02:04:02.083942       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0325 02:04:02.088759       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0325 02:04:02.092713       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0325 02:04:02.092732       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0325 02:04:02.491818       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0325 02:04:02.527326       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0325 02:04:02.631377       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0325 02:04:02.635980       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0325 02:04:02.637060       1 controller.go:611] quota admission added evaluator for: endpoints
	I0325 02:04:02.641368       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0325 02:04:03.224584       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0325 02:04:03.903586       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0325 02:04:03.912681       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0325 02:04:03.924232       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0325 02:04:09.097471       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0325 02:04:16.674024       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0325 02:04:17.226167       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0325 02:04:18.108269       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244] <==
	* I0325 02:04:16.294792       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0325 02:04:16.306176       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0325 02:04:16.309387       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0325 02:04:16.319146       1 shared_informer.go:247] Caches are synced for PV protection 
	I0325 02:04:16.321347       1 shared_informer.go:247] Caches are synced for cronjob 
	I0325 02:04:16.321379       1 shared_informer.go:247] Caches are synced for expand 
	I0325 02:04:16.323537       1 shared_informer.go:247] Caches are synced for GC 
	I0325 02:04:16.323595       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0325 02:04:16.324698       1 shared_informer.go:247] Caches are synced for endpoint 
	I0325 02:04:16.326019       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0325 02:04:16.327121       1 shared_informer.go:247] Caches are synced for deployment 
	I0325 02:04:16.335393       1 shared_informer.go:247] Caches are synced for stateful set 
	I0325 02:04:16.427013       1 shared_informer.go:247] Caches are synced for resource quota 
	I0325 02:04:16.487473       1 shared_informer.go:247] Caches are synced for resource quota 
	I0325 02:04:16.525760       1 shared_informer.go:247] Caches are synced for attach detach 
	I0325 02:04:16.681203       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-l6tg2"
	I0325 02:04:16.682918       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-nhlsm"
	I0325 02:04:16.948974       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0325 02:04:17.002745       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0325 02:04:17.002770       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0325 02:04:17.228152       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0325 02:04:17.243060       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0325 02:04:17.326473       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-4tdtv"
	I0325 02:04:17.331846       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-b9827"
	I0325 02:04:17.391815       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-4tdtv"
	
	* 
	* ==> kube-proxy [0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc] <==
	* I0325 02:04:17.997460       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0325 02:04:17.997538       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0325 02:04:17.997636       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0325 02:04:18.101927       1 server_others.go:206] "Using iptables Proxier"
	I0325 02:04:18.101964       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0325 02:04:18.101979       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0325 02:04:18.102006       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0325 02:04:18.102515       1 server.go:656] "Version info" version="v1.23.4-rc.0"
	I0325 02:04:18.103198       1 config.go:226] "Starting endpoint slice config controller"
	I0325 02:04:18.103254       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0325 02:04:18.103468       1 config.go:317] "Starting service config controller"
	I0325 02:04:18.103487       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0325 02:04:18.204147       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0325 02:04:18.204238       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b] <==
	* W0325 02:04:01.205004       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0325 02:04:01.205041       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0325 02:04:01.205131       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:04:01.205164       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:04:01.205173       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0325 02:04:01.205081       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 02:04:01.205195       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0325 02:04:01.207071       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 02:04:01.207301       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0325 02:04:02.013087       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 02:04:02.013124       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0325 02:04:02.073909       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 02:04:02.073940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0325 02:04:02.127514       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:04:02.127555       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0325 02:04:02.131604       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0325 02:04:02.131636       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0325 02:04:02.135598       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 02:04:02.135634       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0325 02:04:02.165918       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 02:04:02.165954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0325 02:04:02.188481       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0325 02:04:02.188518       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0325 02:04:02.598545       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0325 02:04:02.741384       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-03-25 02:03:29 UTC, end at Fri 2022-03-25 02:16:23 UTC. --
	Mar 25 02:15:04 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:04.460340    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:09 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:09.461517    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:14 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:14.462551    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:19 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:19.464022    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:24 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:24.465929    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:29 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:29.466577    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:34 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:34.468339    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:39 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:39.470035    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:44 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:44.471361    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:46 no-preload-20220325020326-262786 kubelet[1827]: I0325 02:15:46.358516    1827 scope.go:110] "RemoveContainer" containerID="5689d84e1ccc0d805363d2c87e55a31681b9b4f56029bd56cf6bff5839bf21ae"
	Mar 25 02:15:46 no-preload-20220325020326-262786 kubelet[1827]: I0325 02:15:46.358882    1827 scope.go:110] "RemoveContainer" containerID="e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741"
	Mar 25 02:15:46 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:46.359275    1827 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-nhlsm_kube-system(57939cf7-016c-486a-8a08-466ff1515c1f)\"" pod="kube-system/kindnet-nhlsm" podUID=57939cf7-016c-486a-8a08-466ff1515c1f
	Mar 25 02:15:49 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:49.472983    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:54 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:54.474149    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:15:58 no-preload-20220325020326-262786 kubelet[1827]: I0325 02:15:58.107866    1827 scope.go:110] "RemoveContainer" containerID="e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741"
	Mar 25 02:15:58 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:58.108157    1827 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-nhlsm_kube-system(57939cf7-016c-486a-8a08-466ff1515c1f)\"" pod="kube-system/kindnet-nhlsm" podUID=57939cf7-016c-486a-8a08-466ff1515c1f
	Mar 25 02:15:59 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:15:59.475484    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:16:04 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:16:04.476869    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:16:09 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:16:09.477805    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:16:10 no-preload-20220325020326-262786 kubelet[1827]: I0325 02:16:10.108180    1827 scope.go:110] "RemoveContainer" containerID="e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741"
	Mar 25 02:16:10 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:16:10.108519    1827 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-nhlsm_kube-system(57939cf7-016c-486a-8a08-466ff1515c1f)\"" pod="kube-system/kindnet-nhlsm" podUID=57939cf7-016c-486a-8a08-466ff1515c1f
	Mar 25 02:16:14 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:16:14.478798    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:16:19 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:16:19.480024    1827 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:16:21 no-preload-20220325020326-262786 kubelet[1827]: I0325 02:16:21.108317    1827 scope.go:110] "RemoveContainer" containerID="e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741"
	Mar 25 02:16:21 no-preload-20220325020326-262786 kubelet[1827]: E0325 02:16:21.108647    1827 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-nhlsm_kube-system(57939cf7-016c-486a-8a08-466ff1515c1f)\"" pod="kube-system/kindnet-nhlsm" podUID=57939cf7-016c-486a-8a08-466ff1515c1f
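The kubelet loop above ("cni plugin not initialized") is the other face of the kindnet-cni crash loop: kindnet is the DaemonSet that would drop a CNI config into the non-default directory this job configures (kubelet cni-conf-dir=/etc/cni/net.mk, matching conf_dir in the containerd config written earlier), so while it crash-loops the node can never turn Ready. A quick check that the directory ever received a config, assuming shell access to the node:

    ls -l /etc/cni/net.mk             # expect a *.conflist here once kindnet runs
    sudo crictl info | grep -i cni    # containerd's own view of the CNI config state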
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220325020326-262786 -n no-preload-20220325020326-262786
helpers_test.go:262: (dbg) Run:  kubectl --context no-preload-20220325020326-262786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: busybox coredns-64897985d-b9827 storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context no-preload-20220325020326-262786 describe pod busybox coredns-64897985d-b9827 storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20220325020326-262786 describe pod busybox coredns-64897985d-b9827 storage-provisioner: exit status 1 (59.839702ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qzqv (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-4qzqv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  50s (x8 over 8m4s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-b9827" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
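The NotFound errors simply mean those two pods no longer exist by the time the post-mortem runs; the substantive finding is the busybox FailedScheduling event above. DeployApp times out because the single node still carries the not-ready taint shown in the node report, so the pod stays Pending for the full eight minutes. The same diagnosis can be reproduced directly against the cluster, using the context name from this run:

    kubectl --context no-preload-20220325020326-262786 get pod busybox -o wide
    kubectl --context no-preload-20220325020326-262786 get events \
      --field-selector involvedObject.name=busybox --sort-by=.lastTimestamp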
helpers_test.go:278: kubectl --context no-preload-20220325020326-262786 describe pod busybox coredns-64897985d-b9827 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (484.60s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (597.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220325015306-262786 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0325 02:09:40.294941  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-20220325015306-262786 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: exit status 80 (9m54.91532801s)

                                                
                                                
-- stdout --
	* [old-k8s-version-20220325015306-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Kubernetes 1.23.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.3
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node old-k8s-version-20220325015306-262786 in cluster old-k8s-version-20220325015306-262786
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20220325015306-262786" ...
	* Preparing Kubernetes v1.16.0 on containerd 1.5.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image kubernetesui/dashboard:v2.3.1
	  - Using image k8s.gcr.io/echoserver:1.4
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0325 02:09:39.632687  496534 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:09:39.632813  496534 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:09:39.632826  496534 out.go:310] Setting ErrFile to fd 2...
	I0325 02:09:39.632832  496534 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:09:39.632957  496534 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:09:39.633232  496534 out.go:304] Setting JSON to false
	I0325 02:09:39.634405  496534 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17252,"bootTime":1648156928,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:09:39.634473  496534 start.go:125] virtualization: kvm guest
	I0325 02:09:39.637132  496534 out.go:176] * [old-k8s-version-20220325015306-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:09:39.638784  496534 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:09:39.637280  496534 notify.go:193] Checking for updates...
	I0325 02:09:39.640303  496534 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:09:39.641930  496534 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:09:39.643461  496534 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:09:39.644994  496534 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:09:39.645553  496534 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0325 02:09:39.648145  496534 out.go:176] * Kubernetes 1.23.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.3
	I0325 02:09:39.648193  496534 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:09:39.702169  496534 docker.go:136] docker version: linux-20.10.14
	I0325 02:09:39.702292  496534 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:09:39.814487  496534 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:09:39.73783493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:09:39.814657  496534 docker.go:253] overlay module found
	I0325 02:09:39.817515  496534 out.go:176] * Using the docker driver based on existing profile
	I0325 02:09:39.817559  496534 start.go:284] selected driver: docker
	I0325 02:09:39.817568  496534 start.go:801] validating driver "docker" against &{Name:old-k8s-version-20220325015306-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:09:39.817734  496534 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:09:39.817786  496534 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:09:39.817817  496534 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0325 02:09:39.819770  496534 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:09:39.820663  496534 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:09:39.926175  496534 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:09:39.851048277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0325 02:09:39.926340  496534 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:09:39.926385  496534 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0325 02:09:39.928578  496534 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:09:39.928713  496534 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 02:09:39.928740  496534 cni.go:93] Creating CNI manager for ""
	I0325 02:09:39.928750  496534 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:09:39.928764  496534 start_flags.go:304] config:
	{Name:old-k8s-version-20220325015306-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:09:39.930723  496534 out.go:176] * Starting control plane node old-k8s-version-20220325015306-262786 in cluster old-k8s-version-20220325015306-262786
	I0325 02:09:39.930777  496534 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:09:39.932246  496534 out.go:176] * Pulling base image ...
	I0325 02:09:39.932292  496534 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0325 02:09:39.932329  496534 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0325 02:09:39.932346  496534 cache.go:57] Caching tarball of preloaded images
	I0325 02:09:39.932397  496534 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:09:39.932593  496534 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 02:09:39.932611  496534 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0325 02:09:39.932780  496534 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/config.json ...
	I0325 02:09:39.971384  496534 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:09:39.971420  496534 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:09:39.971434  496534 cache.go:208] Successfully downloaded all kic artifacts
	I0325 02:09:39.971476  496534 start.go:348] acquiring machines lock for old-k8s-version-20220325015306-262786: {Name:mk6f712225030023aec99b26d6c356d6d62f23e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:09:39.971567  496534 start.go:352] acquired machines lock for "old-k8s-version-20220325015306-262786" in 67.858µs
	I0325 02:09:39.971587  496534 start.go:94] Skipping create...Using existing machine configuration
	I0325 02:09:39.971592  496534 fix.go:55] fixHost starting: 
	I0325 02:09:39.971831  496534 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 02:09:40.015075  496534 fix.go:108] recreateIfNeeded on old-k8s-version-20220325015306-262786: state=Stopped err=<nil>
	W0325 02:09:40.015117  496534 fix.go:134] unexpected machine state, will restart: <nil>
	I0325 02:09:40.017864  496534 out.go:176] * Restarting existing docker container for "old-k8s-version-20220325015306-262786" ...
	I0325 02:09:40.017962  496534 cli_runner.go:133] Run: docker start old-k8s-version-20220325015306-262786
	I0325 02:09:40.449231  496534 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 02:09:40.484621  496534 kic.go:420] container "old-k8s-version-20220325015306-262786" state is running.
	I0325 02:09:40.485134  496534 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
	I0325 02:09:40.527891  496534 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/config.json ...
	I0325 02:09:40.528123  496534 machine.go:88] provisioning docker machine ...
	I0325 02:09:40.528149  496534 ubuntu.go:169] provisioning hostname "old-k8s-version-20220325015306-262786"
	I0325 02:09:40.528219  496534 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 02:09:40.565510  496534 main.go:130] libmachine: Using SSH client type: native
	I0325 02:09:40.565715  496534 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49569 <nil> <nil>}
	I0325 02:09:40.565733  496534 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220325015306-262786 && echo "old-k8s-version-20220325015306-262786" | sudo tee /etc/hostname
	I0325 02:09:40.566355  496534 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38992->127.0.0.1:49569: read: connection reset by peer
	I0325 02:09:43.695967  496534 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220325015306-262786
	
	I0325 02:09:43.696047  496534 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 02:09:43.731725  496534 main.go:130] libmachine: Using SSH client type: native
	I0325 02:09:43.731899  496534 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49569 <nil> <nil>}
	I0325 02:09:43.731922  496534 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220325015306-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220325015306-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220325015306-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
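	[editor's note] For reference, the hosts script above ensures the node resolves its own hostname locally; after it runs, /etc/hosts inside the container carries a line of the form:
	$ 127.0.1.1 old-k8s-version-20220325015306-262786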
	I0325 02:09:43.854904  496534 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 02:09:43.854943  496534 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:09:43.855007  496534 ubuntu.go:177] setting up certificates
	I0325 02:09:43.855022  496534 provision.go:83] configureAuth start
	I0325 02:09:43.855073  496534 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
	I0325 02:09:43.887401  496534 provision.go:138] copyHostCerts
	I0325 02:09:43.887458  496534 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:09:43.887466  496534 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:09:43.887525  496534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:09:43.887617  496534 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:09:43.887628  496534 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:09:43.887650  496534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:09:43.887707  496534 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:09:43.887715  496534 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:09:43.887741  496534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:09:43.887791  496534 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220325015306-262786 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220325015306-262786]
	I0325 02:09:44.058711  496534 provision.go:172] copyRemoteCerts
	I0325 02:09:44.058785  496534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:09:44.058820  496534 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 02:09:44.093352  496534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49569 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 02:09:44.178481  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:09:44.196581  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0325 02:09:44.214180  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0325 02:09:44.231738  496534 provision.go:86] duration metric: configureAuth took 376.70148ms
	I0325 02:09:44.231765  496534 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:09:44.231935  496534 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0325 02:09:44.231947  496534 machine.go:91] provisioned docker machine in 3.703810145s
	I0325 02:09:44.231953  496534 start.go:302] post-start starting for "old-k8s-version-20220325015306-262786" (driver="docker")
	I0325 02:09:44.231959  496534 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:09:44.232000  496534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:09:44.232035  496534 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 02:09:44.267281  496534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49569 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 02:09:44.359562  496534 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:09:44.362460  496534 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:09:44.362491  496534 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:09:44.362503  496534 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:09:44.362511  496534 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:09:44.362522  496534 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:09:44.362588  496534 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:09:44.362669  496534 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:09:44.362787  496534 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:09:44.370011  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:09:44.389761  496534 start.go:305] post-start completed in 157.780619ms
	I0325 02:09:44.389850  496534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:09:44.389894  496534 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 02:09:44.428885  496534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49569 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 02:09:44.520043  496534 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:09:44.524628  496534 fix.go:57] fixHost completed within 4.553025215s
	I0325 02:09:44.524660  496534 start.go:81] releasing machines lock for "old-k8s-version-20220325015306-262786", held for 4.553079858s
	I0325 02:09:44.524768  496534 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
	I0325 02:09:44.559294  496534 ssh_runner.go:195] Run: systemctl --version
	I0325 02:09:44.559338  496534 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:09:44.559360  496534 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 02:09:44.559395  496534 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 02:09:44.593801  496534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49569 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 02:09:44.594200  496534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49569 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 02:09:44.722764  496534 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:09:44.747937  496534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:09:44.759591  496534 docker.go:183] disabling docker service ...
	I0325 02:09:44.759651  496534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:09:44.781164  496534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:09:44.792612  496534 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:09:44.884528  496534 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:09:45.003105  496534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 02:09:45.015861  496534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:09:45.030487  496534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
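	[editor's note] The base64 payload above is just a plain containerd config.toml; a decoded excerpt (truncated, the decoding is mechanical) begins:
	version = 2
	root = "/var/lib/containerd"
	state = "/run/containerd"
	oom_score = 0
	[grpc]
	  address = "/run/containerd/containerd.sock"
	  ...
	Further down, the same file sets sandbox_image = "k8s.gcr.io/pause:3.1", SystemdCgroup = false, and the CNI paths bin_dir = "/opt/cni/bin" and conf_dir = "/etc/cni/net.mk", matching the kubelet.cni-conf-dir=/etc/cni/net.mk flag shown in stdout.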
	I0325 02:09:45.043994  496534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:09:45.050484  496534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:09:45.057194  496534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:09:45.149273  496534 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:09:45.243631  496534 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:09:45.243709  496534 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:09:45.248214  496534 start.go:462] Will wait 60s for crictl version
	I0325 02:09:45.248291  496534 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:09:45.275301  496534 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:09:45Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
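	[editor's note] containerd's CRI server needs a moment after `systemctl restart containerd`, hence minikube's ~11s backoff here. A sketch of re-running the same readiness probe by hand once the node settles (crictl reads the /etc/crictl.yaml written above):
	$ minikube ssh -p old-k8s-version-20220325015306-262786 -- sudo crictl version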
	I0325 02:09:56.322437  496534 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:09:56.348947  496534 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 02:09:56.349020  496534 ssh_runner.go:195] Run: containerd --version
	I0325 02:09:56.371036  496534 ssh_runner.go:195] Run: containerd --version
	I0325 02:09:56.395649  496534 out.go:176] * Preparing Kubernetes v1.16.0 on containerd 1.5.10 ...
	I0325 02:09:56.395746  496534 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:09:56.437483  496534 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0325 02:09:56.440965  496534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:09:56.454189  496534 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 02:09:56.454280  496534 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0325 02:09:56.454358  496534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:09:56.478863  496534 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:09:56.478889  496534 containerd.go:526] Images already preloaded, skipping extraction
	I0325 02:09:56.478968  496534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:09:56.506790  496534 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:09:56.506814  496534 cache_images.go:84] Images are preloaded, skipping loading
	I0325 02:09:56.506861  496534 ssh_runner.go:195] Run: sudo crictl info
	I0325 02:09:56.534382  496534 cni.go:93] Creating CNI manager for ""
	I0325 02:09:56.534407  496534 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:09:56.534417  496534 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 02:09:56.534432  496534 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220325015306-262786 NodeName:old-k8s-version-20220325015306-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 02:09:56.534591  496534 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20220325015306-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220325015306-262786
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
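	[editor's note] This rendered kubeadm config is written to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp a few lines below); it can be inspected in place with, for example:
	$ minikube ssh -p old-k8s-version-20220325015306-262786 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new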
	
	I0325 02:09:56.534694  496534 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20220325015306-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
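	[editor's note] The kubelet unit and drop-in above are installed below as /lib/systemd/system/kubelet.service and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; systemd can show the merged result:
	$ minikube ssh -p old-k8s-version-20220325015306-262786 -- sudo systemctl cat kubelet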
	I0325 02:09:56.534750  496534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0325 02:09:56.542421  496534 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 02:09:56.542495  496534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 02:09:56.549460  496534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (582 bytes)
	I0325 02:09:56.562979  496534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0325 02:09:56.577951  496534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0325 02:09:56.592248  496534 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0325 02:09:56.595272  496534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:09:56.604751  496534 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786 for IP: 192.168.76.2
	I0325 02:09:56.604865  496534 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 02:09:56.604900  496534 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 02:09:56.604970  496534 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.key
	I0325 02:09:56.605043  496534 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25
	I0325 02:09:56.605077  496534 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key
	I0325 02:09:56.605175  496534 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 02:09:56.605204  496534 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 02:09:56.605215  496534 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 02:09:56.605238  496534 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 02:09:56.605269  496534 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 02:09:56.605293  496534 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 02:09:56.605330  496534 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:09:56.605914  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 02:09:56.624688  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0325 02:09:56.642854  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 02:09:56.661109  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0325 02:09:56.679335  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 02:09:56.697776  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 02:09:56.717820  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 02:09:56.736028  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 02:09:56.754670  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 02:09:56.773858  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 02:09:56.794511  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 02:09:56.814526  496534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 02:09:56.827928  496534 ssh_runner.go:195] Run: openssl version
	I0325 02:09:56.833166  496534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 02:09:56.841819  496534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:09:56.844899  496534 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:09:56.844950  496534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:09:56.849575  496534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0325 02:09:56.856804  496534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 02:09:56.865991  496534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 02:09:56.870126  496534 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 02:09:56.870181  496534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 02:09:56.875734  496534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
	I0325 02:09:56.883338  496534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 02:09:56.891273  496534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 02:09:56.894298  496534 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 02:09:56.894349  496534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 02:09:56.899624  496534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
	I0325 02:09:56.907811  496534 kubeadm.go:391] StartCluster: {Name:old-k8s-version-20220325015306-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:09:56.907928  496534 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 02:09:56.907967  496534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:09:56.934628  496534 cri.go:87] found id: "9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c"
	I0325 02:09:56.934666  496534 cri.go:87] found id: "f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e"
	I0325 02:09:56.934675  496534 cri.go:87] found id: "2a8a16a4c5ab06cec61505599bfcd94a42a8de336bbe343006809032ae98bee0"
	I0325 02:09:56.934690  496534 cri.go:87] found id: "0dcaa5ddf16d74bb0f7b672cf9c1f93a9049cfc9e9fa01287dfc31c913129a95"
	I0325 02:09:56.934698  496534 cri.go:87] found id: "0f2defa775551729a53f4b102a79f5f1c8e3853bbb12ba362f6555860b09d99a"
	I0325 02:09:56.934707  496534 cri.go:87] found id: "1366a173f44ada0abf1e4f2c5003b1d9df1c0ee0a950928cdf3a5f3f7048faaa"
	I0325 02:09:56.934715  496534 cri.go:87] found id: ""
	I0325 02:09:56.934765  496534 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0325 02:09:56.951852  496534 cri.go:114] JSON = null
	W0325 02:09:56.951908  496534 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
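
[Editor's note] The warning above comes from a consistency check: crictl has just reported six kube-system containers, while runc's listing of the same containerd root returned JSON null, so the unpause step cannot tell which of them, if any, are paused. A minimal Go sketch of that cross-check (not minikube's actual code; it shells out to the same two commands shown in the log):

// paused_check.go: reproduce the "list returned 0 containers, but ps
// returned 6" cross-check from the log above. Paths and flags are the
// ones the log runs over ssh; error handling is simplified.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Count kube-system containers known to the CRI.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))

	// Ask runc which containers exist in the k8s.io root; a literal
	// "null" body (as in the log) decodes to an empty slice.
	raw, err := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		panic(err)
	}
	var listed []map[string]interface{}
	if err := json.Unmarshal(raw, &listed); err != nil {
		panic(err)
	}
	if len(listed) == 0 && len(ids) > 0 {
		fmt.Printf("unpause mismatch: runc listed 0, crictl ps returned %d\n", len(ids))
	}
}
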
	I0325 02:09:56.951962  496534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 02:09:56.959522  496534 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0325 02:09:56.959549  496534 kubeadm.go:601] restartCluster start
	I0325 02:09:56.959604  496534 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0325 02:09:56.966307  496534 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:56.967152  496534 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220325015306-262786" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:09:56.967679  496534 kubeconfig.go:127] "old-k8s-version-20220325015306-262786" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0325 02:09:56.968491  496534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:09:56.970296  496534 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0325 02:09:56.977549  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:56.977608  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:56.986444  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:57.186898  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:57.187004  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:57.196337  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:57.387521  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:57.387591  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:57.397853  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:57.586709  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:57.587435  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:57.597694  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:57.786931  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:57.787030  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:57.796873  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:57.987229  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:57.987316  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:58.021144  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:58.187383  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:58.187493  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:58.199074  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:58.387227  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:58.387310  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:58.396431  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:58.586584  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:58.586654  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:58.595808  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:58.787151  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:58.787256  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:58.796540  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:58.986719  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:58.986822  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:58.995845  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:59.187028  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:59.187118  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:59.195937  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:59.387149  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:59.387235  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:59.396773  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:59.587224  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:59.587314  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:59.596213  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:59.786619  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:59.786707  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:59.796501  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:59.986664  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:59.986763  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:59.995703  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:59.995733  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:59.995777  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:10:00.003939  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:10:00.003973  496534 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
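
[Editor's note] The block above is a poll loop: the same sudo pgrep -xnf kube-apiserver.*minikube.* probe is retried roughly every 200ms until a deadline expires, at which point the restart path concludes the apiserver never came up and decides to reconfigure. A minimal sketch of that pattern, with the probe command taken verbatim from the log and an illustrative (assumed) timeout:

// apiserver_wait.go: a sketch of the ~200ms pgrep poll visible above;
// the 3s timeout is an assumption for illustration, not minikube's value.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerPID(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same probe the log runs: newest exact full-command-line match.
		if err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // a kube-apiserver process exists
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for the condition")
}

func main() {
	if err := waitForAPIServerPID(3 * time.Second); err != nil {
		fmt.Println("needs reconfigure: apiserver error:", err)
	}
}
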
	I0325 02:10:00.003983  496534 kubeadm.go:1067] stopping kube-system containers ...
	I0325 02:10:00.003999  496534 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0325 02:10:00.004053  496534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:10:00.030310  496534 cri.go:87] found id: "9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c"
	I0325 02:10:00.030347  496534 cri.go:87] found id: "f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e"
	I0325 02:10:00.030357  496534 cri.go:87] found id: "2a8a16a4c5ab06cec61505599bfcd94a42a8de336bbe343006809032ae98bee0"
	I0325 02:10:00.030366  496534 cri.go:87] found id: "0dcaa5ddf16d74bb0f7b672cf9c1f93a9049cfc9e9fa01287dfc31c913129a95"
	I0325 02:10:00.030374  496534 cri.go:87] found id: "0f2defa775551729a53f4b102a79f5f1c8e3853bbb12ba362f6555860b09d99a"
	I0325 02:10:00.030382  496534 cri.go:87] found id: "1366a173f44ada0abf1e4f2c5003b1d9df1c0ee0a950928cdf3a5f3f7048faaa"
	I0325 02:10:00.030389  496534 cri.go:87] found id: ""
	I0325 02:10:00.030396  496534 cri.go:232] Stopping containers: [9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e 2a8a16a4c5ab06cec61505599bfcd94a42a8de336bbe343006809032ae98bee0 0dcaa5ddf16d74bb0f7b672cf9c1f93a9049cfc9e9fa01287dfc31c913129a95 0f2defa775551729a53f4b102a79f5f1c8e3853bbb12ba362f6555860b09d99a 1366a173f44ada0abf1e4f2c5003b1d9df1c0ee0a950928cdf3a5f3f7048faaa]
	I0325 02:10:00.030441  496534 ssh_runner.go:195] Run: which crictl
	I0325 02:10:00.033546  496534 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e 2a8a16a4c5ab06cec61505599bfcd94a42a8de336bbe343006809032ae98bee0 0dcaa5ddf16d74bb0f7b672cf9c1f93a9049cfc9e9fa01287dfc31c913129a95 0f2defa775551729a53f4b102a79f5f1c8e3853bbb12ba362f6555860b09d99a 1366a173f44ada0abf1e4f2c5003b1d9df1c0ee0a950928cdf3a5f3f7048faaa
	I0325 02:10:00.062332  496534 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0325 02:10:00.072359  496534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:10:00.079466  496534 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5751 Mar 25 01:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5787 Mar 25 01:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5939 Mar 25 01:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Mar 25 01:57 /etc/kubernetes/scheduler.conf
	
	I0325 02:10:00.079531  496534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0325 02:10:00.086733  496534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0325 02:10:00.093510  496534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0325 02:10:00.100688  496534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0325 02:10:00.108477  496534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:10:00.116249  496534 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0325 02:10:00.116273  496534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:10:00.175933  496534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:10:00.770714  496534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:10:00.943605  496534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:10:01.010715  496534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
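
[Editor's note] Rather than a full kubeadm init, the restart path above replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence under the same pinned-binaries PATH the log uses (an illustration of the pattern, not minikube's source; error handling simplified):

// phases.go: replay the kubeadm init phases in the order the log shows,
// with minikube's pinned v1.16.0 binaries first on PATH.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all", "kubeconfig all", "kubelet-start",
		"control-plane all", "etcd local",
	}
	for _, p := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" `+
				`kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
			panic(fmt.Sprintf("phase %q failed: %v", p, err))
		}
	}
}
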
	I0325 02:10:01.116804  496534 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:10:01.116888  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:10:01.626820  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:10:02.127178  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:10:02.192735  496534 api_server.go:71] duration metric: took 1.075932213s to wait for apiserver process to appear ...
	I0325 02:10:02.192775  496534 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:10:02.192791  496534 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0325 02:10:02.193182  496534 api_server.go:256] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0325 02:10:02.693918  496534 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0325 02:10:06.604627  496534 api_server.go:266] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0325 02:10:06.604658  496534 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0325 02:10:06.693941  496534 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0325 02:10:06.813382  496534 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:10:06.813427  496534 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:10:07.194038  496534 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0325 02:10:07.199671  496534 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:10:07.199698  496534 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:10:07.693961  496534 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0325 02:10:07.698533  496534 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:10:07.698567  496534 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:10:08.193736  496534 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0325 02:10:08.200650  496534 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0325 02:10:08.209168  496534 api_server.go:140] control plane version: v1.16.0
	I0325 02:10:08.209201  496534 api_server.go:130] duration metric: took 6.016418382s to wait for apiserver health ...
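
[Editor's note] The healthz wait above tolerates intermediate failures: first a 403 while anonymous requests are still forbidden, then 500s while individual poststarthooks report "failed: reason withheld", until the endpoint finally returns 200 "ok". A sketch of such a poll, assuming no client certificates (hence the skipped TLS verification) and an illustrative poll interval:

// healthz_wait.go: poll the endpoint from the log until it reports 200,
// treating any other status (the 403 and 500 responses above) as
// "keep waiting". InsecureSkipVerify is used only because this
// illustrative probe carries no client certs.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.76.2:8443/healthz" // endpoint from the log
	for {
		resp, err := client.Get(url)
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", status, "- retrying")
		}
		time.Sleep(500 * time.Millisecond) // the log polls about twice a second
	}
}
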
	I0325 02:10:08.209214  496534 cni.go:93] Creating CNI manager for ""
	I0325 02:10:08.209222  496534 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:10:08.211937  496534 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:10:08.211995  496534 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:10:08.216151  496534 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0325 02:10:08.216175  496534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:10:08.230684  496534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:10:08.456529  496534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:10:08.463796  496534 system_pods.go:59] 8 kube-system pods found
	I0325 02:10:08.463830  496534 system_pods.go:61] "coredns-5644d7b6d9-trm4j" [9facf37e-d2f8-4d16-bde1-5c3063be4439] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0325 02:10:08.463837  496534 system_pods.go:61] "etcd-old-k8s-version-20220325015306-262786" [b44593d0-68c8-4a88-942a-108ed1c244c6] Running
	I0325 02:10:08.463844  496534 system_pods.go:61] "kindnet-rx7hj" [bf35a126-09fa-4db9-9aa4-2cb811bf4595] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:10:08.463851  496534 system_pods.go:61] "kube-apiserver-old-k8s-version-20220325015306-262786" [71c77eeb-8312-4550-8800-57f74f6c9c19] Running
	I0325 02:10:08.463856  496534 system_pods.go:61] "kube-controller-manager-old-k8s-version-20220325015306-262786" [d0b14926-624e-4820-8cbd-ddceaaea8158] Running
	I0325 02:10:08.463860  496534 system_pods.go:61] "kube-proxy-wxllf" [8df13659-eaff-4414-b783-5e971e2dae50] Running
	I0325 02:10:08.463864  496534 system_pods.go:61] "kube-scheduler-old-k8s-version-20220325015306-262786" [1959bc6c-50bb-4c1a-b023-da9e0537f39b] Running
	I0325 02:10:08.463869  496534 system_pods.go:61] "storage-provisioner" [883b8731-7316-4492-8ee8-5ad30fc133c0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0325 02:10:08.463876  496534 system_pods.go:74] duration metric: took 7.324225ms to wait for pod list to return data ...
	I0325 02:10:08.463886  496534 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:10:08.466259  496534 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:10:08.466284  496534 node_conditions.go:123] node cpu capacity is 8
	I0325 02:10:08.466295  496534 node_conditions.go:105] duration metric: took 2.40246ms to run NodePressure ...
	I0325 02:10:08.466314  496534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:10:08.685667  496534 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0325 02:10:08.690463  496534 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I0325 02:10:09.055023  496534 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I0325 02:10:09.496283  496534 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0325 02:10:10.028872  496534 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0325 02:10:10.813640  496534 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0325 02:10:12.320569  496534 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0325 02:10:13.398744  496534 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0325 02:10:15.272709  496534 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0325 02:10:17.826836  496534 retry.go:31] will retry after 5.131623747s: kubelet not initialised
	I0325 02:10:22.964529  496534 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I0325 02:10:32.727631  496534 retry.go:31] will retry after 18.937774914s: kubelet not initialised
	I0325 02:10:51.672096  496534 retry.go:31] will retry after 15.44552029s: kubelet not initialised
	I0325 02:11:07.122083  496534 kubeadm.go:752] kubelet initialised
	I0325 02:11:07.122105  496534 kubeadm.go:753] duration metric: took 58.436406731s waiting for restarted kubelet to initialise ...
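
[Editor's note] The retry.go lines above wait with roughly doubling, jittered intervals (360ms, 436ms, 527ms, ... up to 18.9s) rather than a fixed period. A minimal backoff sketch in that spirit; the exact multiplier and jitter fraction are assumptions, not minikube's actual retry policy:

// backoff.go: retry a probe with exponential backoff plus jitter,
// mirroring the growing waits in the retry.go lines above.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	wait := initial
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// Randomize a bit so concurrent callers don't retry in lockstep.
		jitter := time.Duration(rand.Int63n(int64(wait) / 4))
		time.Sleep(wait + jitter)
		wait *= 2
	}
	return fmt.Errorf("still failing after %d attempts", attempts)
}

func main() {
	err := retryWithBackoff(5, 360*time.Millisecond, func() error {
		return fmt.Errorf("kubelet not initialised") // stand-in probe
	})
	fmt.Println(err)
}
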
	I0325 02:11:07.122113  496534 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:11:07.126263  496534 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace to be "Ready" ...
	I0325 02:11:09.132444  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:11.632007  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:13.632324  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:16.132388  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:18.632014  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:21.132025  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:23.631983  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:25.632464  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:28.131852  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:30.632396  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:33.131560  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:35.132145  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:37.632739  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:40.132081  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:42.631618  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:44.631713  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:46.631932  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:48.632103  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:51.132391  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:53.631360  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:55.631562  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:57.632380  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:00.131822  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:02.132591  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:04.632261  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:06.632949  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:08.633116  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:11.131646  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:13.131704  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:15.132130  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:17.132393  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:19.631849  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:21.632408  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:24.132780  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:26.632411  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:28.632603  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:31.132181  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:33.132360  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:35.132525  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:37.632302  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:40.131788  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:42.131858  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:44.132177  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:46.132370  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:48.632249  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:51.131576  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:53.132752  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:55.632286  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:58.131404  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... pod_ready.go:102 line above repeated every 2-2.5s with only the timestamp changing, 02:13:00 through 02:15:03 (55 lines collapsed) ...]
	I0325 02:15:06.133221  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:15:07.129647  496534 pod_ready.go:81] duration metric: took 4m0.003344658s waiting for pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace to be "Ready" ...
	E0325 02:15:07.129679  496534 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace to be "Ready" (will not retry!)
	I0325 02:15:07.129706  496534 pod_ready.go:38] duration metric: took 4m0.007579495s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:15:07.129739  496534 kubeadm.go:605] restartCluster took 5m10.170183992s
	W0325 02:15:07.129878  496534 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
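
	The four-minute pod_ready.go loop above never progresses because scheduling itself fails: the single node still carries a taint that the coredns pod does not tolerate, so the pod never leaves Pending. Below is a minimal client-go sketch of that kind of readiness poll; the kubeconfig path, pod name, and timings are taken from this log, and the program is only an illustration, not minikube's actual pod_ready.go. On timeout it prints each node's taints, the first thing to inspect when the scheduler reports "node(s) had taints that the pod didn't tolerate".

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as it appears on the minikube node in this log
		// (assumption: it is readable from wherever this sketch runs).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()

		deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5644d7b6d9-trm4j", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					// A pod is "Ready" when its PodReady condition is True.
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second) // roughly the cadence seen above
		}

		// Timed out: show the taints that kept the scheduler away.
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("node %s taints: %v\n", n.Name, n.Spec.Taints)
		}
	}
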
	I0325 02:15:07.129917  496534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0325 02:15:08.453828  496534 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.323884931s)
	I0325 02:15:08.453931  496534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:15:08.464163  496534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:15:08.471674  496534 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 02:15:08.471763  496534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:15:08.479558  496534 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0325 02:15:08.479607  496534 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0325 02:15:08.849035  496534 out.go:203]   - Generating certificates and keys ...
	I0325 02:15:09.916671  496534 out.go:203]   - Booting up control plane ...
	I0325 02:15:18.962909  496534 out.go:203]   - Configuring RBAC rules ...
	I0325 02:15:19.380094  496534 cni.go:93] Creating CNI manager for ""
	I0325 02:15:19.380119  496534 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:15:19.382375  496534 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:15:19.382444  496534 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:15:19.386281  496534 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0325 02:15:19.386303  496534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:15:19.400237  496534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:15:19.619135  496534 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 02:15:19.619204  496534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=old-k8s-version-20220325015306-262786 minikube.k8s.io/updated_at=2022_03_25T02_15_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:15:19.619204  496534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:15:19.732135  496534 ops.go:34] apiserver oom_adj: -16
	I0325 02:15:19.732298  496534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... the same "kubectl get sa default" probe repeated every ~0.5s, 02:15:20 through 02:15:33 (27 lines collapsed) ...]
	I0325 02:15:33.844522  496534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:15:33.927805  496534 kubeadm.go:1020] duration metric: took 14.308654301s to wait for elevateKubeSystemPrivileges.
	I0325 02:15:33.927850  496534 kubeadm.go:393] StartCluster complete in 5m37.020046322s
	I0325 02:15:33.927878  496534 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:15:33.928129  496534 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:15:33.929473  496534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:15:34.447471  496534 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220325015306-262786" rescaled to 1
	I0325 02:15:34.447546  496534 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:15:34.449592  496534 out.go:176] * Verifying Kubernetes components...
	I0325 02:15:34.447578  496534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 02:15:34.449674  496534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:15:34.447594  496534 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0325 02:15:34.447898  496534 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0325 02:15:34.449772  496534 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220325015306-262786"
	I0325 02:15:34.449790  496534 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220325015306-262786"
	W0325 02:15:34.449805  496534 addons.go:165] addon storage-provisioner should already be in state true
	I0325 02:15:34.449820  496534 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20220325015306-262786"
	I0325 02:15:34.449839  496534 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20220325015306-262786"
	I0325 02:15:34.449871  496534 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20220325015306-262786"
	W0325 02:15:34.449885  496534 addons.go:165] addon dashboard should already be in state true
	I0325 02:15:34.449926  496534 host.go:66] Checking if "old-k8s-version-20220325015306-262786" exists ...
	I0325 02:15:34.449844  496534 host.go:66] Checking if "old-k8s-version-20220325015306-262786" exists ...
	I0325 02:15:34.449848  496534 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20220325015306-262786"
	W0325 02:15:34.450095  496534 addons.go:165] addon metrics-server should already be in state true
	I0325 02:15:34.450122  496534 host.go:66] Checking if "old-k8s-version-20220325015306-262786" exists ...
	I0325 02:15:34.449820  496534 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220325015306-262786"
	I0325 02:15:34.450201  496534 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220325015306-262786"
	I0325 02:15:34.450525  496534 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 02:15:34.450528  496534 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 02:15:34.450530  496534 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 02:15:34.450602  496534 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 02:15:34.461499  496534 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220325015306-262786" to be "Ready" ...
	I0325 02:15:34.497718  496534 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:15:34.497910  496534 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:15:34.497931  496534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 02:15:34.497991  496534 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 02:15:34.504975  496534 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I0325 02:15:34.506555  496534 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0325 02:15:34.506634  496534 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0325 02:15:34.506651  496534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0325 02:15:34.506740  496534 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 02:15:34.512208  496534 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0325 02:15:34.512286  496534 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0325 02:15:34.512301  496534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0325 02:15:34.512364  496534 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 02:15:34.521287  496534 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220325015306-262786"
	W0325 02:15:34.521318  496534 addons.go:165] addon default-storageclass should already be in state true
	I0325 02:15:34.521351  496534 host.go:66] Checking if "old-k8s-version-20220325015306-262786" exists ...
	I0325 02:15:34.521986  496534 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
	I0325 02:15:34.547907  496534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49569 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 02:15:34.557769  496534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49569 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 02:15:34.566802  496534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49569 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 02:15:34.567030  496534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0325 02:15:34.569631  496534 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 02:15:34.569658  496534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 02:15:34.569709  496534 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
	I0325 02:15:34.612129  496534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49569 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
	I0325 02:15:34.704478  496534 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0325 02:15:34.704511  496534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0325 02:15:34.706154  496534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:15:34.712312  496534 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0325 02:15:34.712342  496534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0325 02:15:34.785412  496534 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0325 02:15:34.785464  496534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0325 02:15:34.788984  496534 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0325 02:15:34.789011  496534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0325 02:15:34.803765  496534 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0325 02:15:34.803803  496534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0325 02:15:34.808253  496534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 02:15:34.809728  496534 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:15:34.809757  496534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0325 02:15:34.884792  496534 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0325 02:15:34.884823  496534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0325 02:15:34.888831  496534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:15:34.901769  496534 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0325 02:15:34.901814  496534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0325 02:15:34.991123  496534 start.go:777] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0325 02:15:34.996946  496534 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0325 02:15:34.996976  496534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0325 02:15:35.084447  496534 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0325 02:15:35.084506  496534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0325 02:15:35.102332  496534 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0325 02:15:35.102367  496534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0325 02:15:35.120388  496534 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:15:35.120418  496534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0325 02:15:35.196415  496534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:15:35.599300  496534 addons.go:386] Verifying addon metrics-server=true in "old-k8s-version-20220325015306-262786"
	I0325 02:15:36.025018  496534 out.go:176] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0325 02:15:36.025127  496534 addons.go:417] enableAddons completed in 1.577535544s
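
	With the addons enabled, the remaining wait is for the node itself: it typically stays NotReady until the CNI it was configured with (kindnet here, applied at 02:15:19) initializes, which is what the node_ready.go loop below polls for, up to the 6m0s budget announced at 02:15:34. A short client-go sketch of that condition check follows; the helper name and package are assumptions, not minikube's node_ready.go:

	package bootstrap

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the node's NodeReady condition until it reports
	// True or the timeout expires; a node without a working CNI reports
	// False here, which matches the "Ready":"False" lines below.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}
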
	I0325 02:15:36.468081  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	[... node_ready.go:58 line above repeated every 2-2.5s with only the timestamp changing, 02:15:38 through 02:18:23 (73 lines collapsed) ...]
	I0325 02:18:26.468766  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:28.968612  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:30.968779  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:33.468295  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:35.468741  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:37.968661  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:40.468313  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:42.468818  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:44.968325  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:47.468369  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:49.968304  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:51.968856  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:54.468654  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:56.968573  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:58.968977  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:01.470174  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:03.968282  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:05.968412  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:07.968843  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:10.467818  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:12.468731  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:14.967929  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:16.968185  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:18.968894  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:21.468097  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:23.468504  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:25.968086  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:28.467817  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:30.467966  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:32.967797  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:34.470135  496534 node_ready.go:38] duration metric: took 4m0.008592307s waiting for node "old-k8s-version-20220325015306-262786" to be "Ready" ...
	I0325 02:19:34.472535  496534 out.go:176] 
	W0325 02:19:34.472693  496534 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:19:34.472714  496534 out.go:241] * 
	W0325 02:19:34.473654  496534 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0325 02:19:34.474914  496534 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-20220325015306-262786 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0": exit status 80
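The failure above is minikube's node-readiness wait expiring: node_ready.go polls the node object until its "Ready" condition turns "True" and gives up once the 6m0s budget runs out (the final 4m0s of polling is visible in this log). As a rough illustration only, and not minikube's actual node_ready.go, the same pattern can be sketched with client-go; the profile name below is taken from this run, everything else is a generic assumption:

	// readiness_sketch.go: poll a node's Ready condition until a timeout,
	// mirroring the `has status "Ready":"False"` lines in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API hiccups as "not ready yet" and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = waitNodeReady(cs, "old-k8s-version-20220325015306-262786", 6*time.Minute)
		fmt.Println("wait result:", err) // on this run: timed out waiting for the condition
	}

When the poll times out like this, the usual next step is to ask why the kubelet never posted Ready (for instance a CNI plugin that never came up), e.g. with kubectl describe node against the profile's context.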
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220325015306-262786
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220325015306-262786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b",
	        "Created": "2022-03-25T01:56:43.297059247Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496863,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-25T02:09:40.440687134Z",
	            "FinishedAt": "2022-03-25T02:09:39.001215404Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/hosts",
	        "LogPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b-json.log",
	        "Name": "/old-k8s-version-20220325015306-262786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220325015306-262786:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220325015306-262786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3-init/diff:/var/lib/docker/overlay2/be54eb94fcdf63a43430c63a2deca34069b6322a2c5c80bf32c41c253b4eef44/diff:/var/lib/docker/overlay2/21ae1babc9289fff260c3571871aeb833b37e21656a9cc3eb8df07eb3fe4f79a/diff:/var/lib/docker/overlay2/3ee3331c2ddb88569b411d0ab54a2ef7d7d01ca16c80ced0833580bfbe9bdada/diff:/var/lib/docker/overlay2/d8bc8d60c9bd47ff1095b644ef6d44396d637a148bcebd5ea3b7706fee6b13fe/diff:/var/lib/docker/overlay2/ca1519d93c9c70a99f709b179bab33e31837f4b561c407c362770656a0ad970a/diff:/var/lib/docker/overlay2/8b7ed626d5c01c442f80e5e9bbe87bdaa4e3b209e4d0720010e78ab32631a44a/diff:/var/lib/docker/overlay2/fb54723378f675b6bc72cd8608807007fdf0fc435e1383398764588c2881dcc7/diff:/var/lib/docker/overlay2/20deb1df880f3adcdce0caa0e0b6ce0170bb01f7b7c564aa7c00c10e886a8422/diff:/var/lib/docker/overlay2/3e9c58516a6ca7eb07cbd77ece10826bcffc2c564c20a046413c894f1e457c14/diff:/var/lib/docker/overlay2/9fb4a5
72727350e63058db77497edb3aa8f3fd157bf3faa4b882f3d6218a2d2c/diff:/var/lib/docker/overlay2/2dad70b776042365cd2686f6925d1728b98e82e82f5ec21fcafaa6ce796653ed/diff:/var/lib/docker/overlay2/d94272e0e249e656b05e1483e035d137254d3bab6b9c568065d1f8783a72cf04/diff:/var/lib/docker/overlay2/c92254120acded698585ef577c9ac3d6f73267981cf36a87ee38ccd694f47b94/diff:/var/lib/docker/overlay2/84b4bbb670c367ba779baabe503b5345e2c3e2beb5a4505c3b235e5db4e89ee6/diff:/var/lib/docker/overlay2/4981a02b24aef7d5c066a42837381dcdd4a299b491d8e55523fca674cd0db0d1/diff:/var/lib/docker/overlay2/c3c34e9c466bb3a144a51042f0930825943916afe285a7f97644c400518f341f/diff:/var/lib/docker/overlay2/44f4921d100d6ba90db390588513726503aec84844325bd99eeb137c6018277f/diff:/var/lib/docker/overlay2/a39a458488b7f863079e4c6b58196e8a4f9082987519a734c45a007cd0d94828/diff:/var/lib/docker/overlay2/f0312047c7b0b02fd66fd826e23406e40cee0ca3ceecfd3ead5dcecbc5026230/diff:/var/lib/docker/overlay2/96c9397a20500e41888794ddb5877995a1734042648a24d59ca0d2ec5021e9de/diff:/var/lib/d
ocker/overlay2/8dea2fad08fc7127380e7b5ee48074c49d9bb8abb4e0e626d1753b47e734e16a/diff:/var/lib/docker/overlay2/b45ce3d74626e250be956220b3bdd19784c7b5f160566cf2abc4e3bebec2e787/diff:/var/lib/docker/overlay2/c53d5b53646725c2e75d104fbdf63f67b1a6d4ec7be410f678c39db7ca88704f/diff:/var/lib/docker/overlay2/c989625713fed7c79c6acf122f86cb4a5d36c5c25f16b6ff042aba0f5c76ef40/diff:/var/lib/docker/overlay2/062c90de70f705242f19d7fd008480be165d852e536336d97bcfe7aaba03bc2c/diff:/var/lib/docker/overlay2/9dd5e3e1997449a8dd0820e30ab1aa5b34db265e9783f9431ebcca7ceaf17510/diff:/var/lib/docker/overlay2/4cb50a0a67380109d348cd3005e5b855fceaf243cf5b0130df8952ed58e6c56c/diff:/var/lib/docker/overlay2/1fce572a3789e30bd91fd684a3bc2cae58743b3718b1d078378158f22156795e/diff:/var/lib/docker/overlay2/2bb28738c8f2de75a3da83169e8b29e28c57bf73908fe80dbca06551ac39d459/diff:/var/lib/docker/overlay2/70c1f9c120af3acda7bbd97c063aeed205a47f16b2818b7a2c4e5cfa2e3321bf/diff:/var/lib/docker/overlay2/84cfc718f71abd3da77845f467dceaeceb62953d1f92e9cb2d966b19d2e
9a733/diff:/var/lib/docker/overlay2/8d6f862f75e903072cefca0f974c925dc5946ac5bf7bcb923adecf23cdb3d454/diff:/var/lib/docker/overlay2/778af97f4ec3a1e9ceed247958939b375c3209058ee649ac0231b3ccf59c0e5d/diff:/var/lib/docker/overlay2/c0e0a5b57f41ef9ddf67d67f928bcbbd060abb8aa3ec732c9ee48b3d5ce723a2/diff:/var/lib/docker/overlay2/f4bc2ed173f4985e492d89df0a08aa6017952a9ac37054537d57bb7589c1560e/diff:/var/lib/docker/overlay2/562d496753ef0c1e8279787dfdb7cb4d6e8cfbd0eaf79a1f9dc3fd10916330b5/diff:/var/lib/docker/overlay2/717fb77b4f16514e3bd496845adfe213bd63609053b43f6d800e6757197f0f04/diff:/var/lib/docker/overlay2/4e8d84337665652419a5a40f908d308087c202f55b785114c1e38be84a17eca7/diff:/var/lib/docker/overlay2/5b34f3b4b29c9f9ab991b524096584bbf01d14e9d8d4b7786bda6d28241999e8/diff:/var/lib/docker/overlay2/49e6c28c6a50420d2f8e2a3d3278d425495086d9478a7ece39dd989925949a5d/diff:/var/lib/docker/overlay2/86c1534e0117ca4e106fa3c177c4f1b2d85e37b9d2a5dceeb007afff1721713e/diff:/var/lib/docker/overlay2/c5013a5641f131cadca99884c2ae5b785bfae4
a079463490ea0cd215cd884d43/diff:/var/lib/docker/overlay2/f61ccdb261987275521111370c06a14baf102e5047e24281f278eaaee820a410/diff:/var/lib/docker/overlay2/46838e2b0c3f67b4bfda29963d76e2c8babbd54904a4a6f5745e924a73437c2d/diff:/var/lib/docker/overlay2/16180439a4d3ee12ff794b26cbfd692186d7785b4c6f33c8c57416535667c54e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220325015306-262786",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220325015306-262786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220325015306-262786",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220325015306-262786",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220325015306-262786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9df45f20e753a3442300e56c843c60395eccdf6e8a137107895ab514717212ce",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49569"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49568"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49565"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49567"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49566"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9df45f20e753",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220325015306-262786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e6a4c0e8f4c7",
	                        "old-k8s-version-20220325015306-262786"
	                    ],
	                    "NetworkID": "739cf1dc095b5d758dfcb21f6f999d4a170c6b33046de4a26204586f05d2d4a4",
	                    "EndpointID": "57c238ff56f27a16e123eaa684322d154da1947f1f6746c5d5637556a31c9292",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
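The inspect output above is what the post-mortem helper keys on: the container is Running with RestartCount 0 and holds 192.168.76.2 on the profile's Docker network, so the failure is inside the guest rather than at the container level. The same fields can be pulled programmatically; a minimal sketch assuming the Docker Engine Go SDK (github.com/docker/docker/client), with the profile name taken from this run:

	// inspect_sketch.go: fetch the state and network fields the post-mortem
	// above relies on, via the Docker Engine API instead of the CLI.
	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		name := "old-k8s-version-20220325015306-262786"
		info, err := cli.ContainerInspect(context.Background(), name)
		if err != nil {
			panic(err)
		}
		fmt.Println("status:", info.State.Status, "restarts:", info.RestartCount)
		// minikube names the profile's network after the profile, so look it up by name.
		if ep, ok := info.NetworkSettings.Networks[name]; ok {
			fmt.Println("ip:", ep.IPAddress, "gateway:", ep.Gateway)
		}
	}

On the CLI the equivalent is a Go template, e.g. docker inspect -f '{{.State.Status}}' old-k8s-version-20220325015306-262786.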
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220325015306-262786 -n old-k8s-version-20220325015306-262786
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20220325015306-262786 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20220325015306-262786 logs -n 25: (1.062647024s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | bridge-20220325014920-262786                               | bridge-20220325014920-262786                     | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:52 UTC | Fri, 25 Mar 2022 02:09:53 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | bridge-20220325014920-262786                     | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:53 UTC | Fri, 25 Mar 2022 02:09:56 UTC |
	|         | bridge-20220325014920-262786                               |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20220325020956-262786      | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:56 UTC | Fri, 25 Mar 2022 02:09:56 UTC |
	|         | disable-driver-mounts-20220325020956-262786                |                                                  |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:12 UTC | Fri, 25 Mar 2022 02:14:36 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.3                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:47 UTC | Fri, 25 Mar 2022 02:14:47 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| pause   | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:47 UTC | Fri, 25 Mar 2022 02:14:48 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| unpause | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:49 UTC | Fri, 25 Mar 2022 02:14:50 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:52 UTC | Fri, 25 Mar 2022 02:14:53 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:51 UTC | Fri, 25 Mar 2022 02:14:54 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:54 UTC | Fri, 25 Mar 2022 02:14:54 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20220325021454-262786 --memory=2200          | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:54 UTC | Fri, 25 Mar 2022 02:15:49 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:49 UTC | Fri, 25 Mar 2022 02:15:50 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:50 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:21 UTC | Fri, 25 Mar 2022 02:16:22 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:23 UTC | Fri, 25 Mar 2022 02:16:24 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:24 UTC | Fri, 25 Mar 2022 02:16:25 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:25 UTC | Fri, 25 Mar 2022 02:16:35 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:35 UTC | Fri, 25 Mar 2022 02:16:35 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20220325021454-262786 --memory=2200          | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:45 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:45 UTC | Fri, 25 Mar 2022 02:16:45 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:45 UTC | Fri, 25 Mar 2022 02:16:46 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:47 UTC | Fri, 25 Mar 2022 02:16:48 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:48 UTC | Fri, 25 Mar 2022 02:16:51 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:51 UTC | Fri, 25 Mar 2022 02:16:52 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 02:16:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0325 02:16:35.482311  519649 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:16:35.482451  519649 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:16:35.482462  519649 out.go:310] Setting ErrFile to fd 2...
	I0325 02:16:35.482467  519649 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:16:35.482575  519649 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:16:35.482813  519649 out.go:304] Setting JSON to false
	I0325 02:16:35.484309  519649 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17668,"bootTime":1648156928,"procs":518,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:16:35.484382  519649 start.go:125] virtualization: kvm guest
	I0325 02:16:35.487068  519649 out.go:176] * [no-preload-20220325020326-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:16:35.488730  519649 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:16:35.487298  519649 notify.go:193] Checking for updates...
	I0325 02:16:35.490311  519649 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:16:35.491877  519649 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:16:35.493486  519649 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:16:35.495057  519649 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:16:35.496266  519649 config.go:176] Loaded profile config "no-preload-20220325020326-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:16:35.497491  519649 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:16:35.540694  519649 docker.go:136] docker version: linux-20.10.14
	I0325 02:16:35.540841  519649 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:16:35.641548  519649 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:16:35.575580325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:16:35.641678  519649 docker.go:253] overlay module found
	I0325 02:16:35.644240  519649 out.go:176] * Using the docker driver based on existing profile
	I0325 02:16:35.644293  519649 start.go:284] selected driver: docker
	I0325 02:16:35.644302  519649 start.go:801] validating driver "docker" against &{Name:no-preload-20220325020326-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:16:35.644458  519649 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:16:35.644501  519649 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:16:35.644530  519649 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:16:35.646030  519649 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:16:35.646742  519649 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:16:35.752278  519649 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:16:35.682730162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0325 02:16:35.752465  519649 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:16:35.752492  519649 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:16:35.754658  519649 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:16:35.754778  519649 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 02:16:35.754810  519649 cni.go:93] Creating CNI manager for ""
	I0325 02:16:35.754821  519649 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:16:35.754840  519649 start_flags.go:304] config:
	{Name:no-preload-20220325020326-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:16:35.756791  519649 out.go:176] * Starting control plane node no-preload-20220325020326-262786 in cluster no-preload-20220325020326-262786
	I0325 02:16:35.756829  519649 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:16:35.758358  519649 out.go:176] * Pulling base image ...
	I0325 02:16:35.758390  519649 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime containerd
	I0325 02:16:35.758492  519649 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:16:35.758563  519649 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/config.json ...
	I0325 02:16:35.758688  519649 cache.go:107] acquiring lock: {Name:mkadc5033eb4d9179acd1c6e7ff0e25d4981568c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758710  519649 cache.go:107] acquiring lock: {Name:mk0987b0339865c5416a6746bce8670ad78c0a9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758707  519649 cache.go:107] acquiring lock: {Name:mkdc6a82c5ad28a9b97463884b87944eaef2fef7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758830  519649 cache.go:107] acquiring lock: {Name:mk140b8e2c06d387b642b813a7efd82a9f19d6c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758829  519649 cache.go:107] acquiring lock: {Name:mk8ed79f1ecf0bc83b0d3ead06534032f65db356 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758880  519649 cache.go:107] acquiring lock: {Name:mkcb4c0577b6fb6a4cc15cd1cfc04742789dcc24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758920  519649 cache.go:107] acquiring lock: {Name:mk1134717661547774a1dd6d6e2854162646543d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758911  519649 cache.go:107] acquiring lock: {Name:mk61dd10aefdeb5283d07e3024688797852e36d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.759022  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
	I0325 02:16:35.759030  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0 exists
	I0325 02:16:35.759047  519649 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 372.469µs
	I0325 02:16:35.759047  519649 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0" took 131.834µs
	I0325 02:16:35.759061  519649 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
	I0325 02:16:35.759064  519649 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0 succeeded
	I0325 02:16:35.758904  519649 cache.go:107] acquiring lock: {Name:mkcf6d57389d13d4e31240b1cdf9af5455cf82f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.759073  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0 exists
	I0325 02:16:35.759078  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0325 02:16:35.759099  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 exists
	I0325 02:16:35.759090  519649 cache.go:107] acquiring lock: {Name:mkd382d09a068cdb98cdc085f7d3d174faef8f1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.759109  519649 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1" took 210.056µs
	I0325 02:16:35.759116  519649 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
	I0325 02:16:35.759104  519649 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 350.331µs
	I0325 02:16:35.759086  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists
	I0325 02:16:35.759124  519649 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0325 02:16:35.759102  519649 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0" took 354.111µs
	I0325 02:16:35.759149  519649 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0 succeeded
	I0325 02:16:35.759143  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0 exists
	I0325 02:16:35.759144  519649 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 439.796µs
	I0325 02:16:35.759168  519649 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded
	I0325 02:16:35.759127  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists
	I0325 02:16:35.759167  519649 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0" took 339.705µs
	I0325 02:16:35.759178  519649 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0 succeeded
	I0325 02:16:35.759105  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0325 02:16:35.759188  519649 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 362.557µs
	I0325 02:16:35.759203  519649 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded
	I0325 02:16:35.759199  519649 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 504.454µs
	I0325 02:16:35.759217  519649 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0325 02:16:35.759228  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0 exists
	I0325 02:16:35.759276  519649 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0" took 279.744µs
	I0325 02:16:35.759305  519649 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0 succeeded
	I0325 02:16:35.759331  519649 cache.go:87] Successfully saved all images to host disk.
	I0325 02:16:35.794208  519649 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:16:35.794250  519649 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:16:35.794266  519649 cache.go:208] Successfully downloaded all kic artifacts
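
	Each cache.go line above follows the same pattern: take a per-image lock, stat the tarball under .minikube/cache/images/amd64, and log the lookup duration on a hit. A minimal Go sketch of that hit path, with hypothetical helper names rather than minikube's real internals:

	    package main

	    import (
	        "fmt"
	        "os"
	        "path/filepath"
	        "strings"
	        "sync"
	        "time"
	    )

	    // cacheLocks stands in for the per-image locks acquired above
	    // (illustrative only; not minikube's actual lock implementation).
	    var cacheLocks sync.Map

	    // cachePath maps "k8s.gcr.io/pause:3.6" to <root>/k8s.gcr.io/pause_3.6,
	    // matching the tarball paths in the log.
	    func cachePath(root, image string) string {
	        return filepath.Join(root, strings.ReplaceAll(image, ":", "_"))
	    }

	    func ensureCached(root, image string) (bool, error) {
	        m, _ := cacheLocks.LoadOrStore(image, &sync.Mutex{})
	        mu := m.(*sync.Mutex)
	        mu.Lock()
	        defer mu.Unlock()

	        start := time.Now()
	        if _, err := os.Stat(cachePath(root, image)); err == nil {
	            // The "exists" + "took NNNµs" pair of log lines above.
	            fmt.Printf("cache image %q took %s (hit)\n", image, time.Since(start))
	            return true, nil
	        }
	        return false, nil // a miss would pull the image and save the tar here
	    }

	    func main() {
	        ensureCached(".minikube/cache/images/amd64", "k8s.gcr.io/pause:3.6")
	    }
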
	I0325 02:16:35.794300  519649 start.go:348] acquiring machines lock for no-preload-20220325020326-262786: {Name:mk0b68e00c1687cd51ada59f78a2181cd58687dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.794388  519649 start.go:352] acquired machines lock for "no-preload-20220325020326-262786" in 69.622µs
	I0325 02:16:35.794408  519649 start.go:94] Skipping create...Using existing machine configuration
	I0325 02:16:35.794412  519649 fix.go:55] fixHost starting: 
	I0325 02:16:35.794639  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:16:35.829675  519649 fix.go:108] recreateIfNeeded on no-preload-20220325020326-262786: state=Stopped err=<nil>
	W0325 02:16:35.829710  519649 fix.go:134] unexpected machine state, will restart: <nil>
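
	The fix.go lines above inspect the existing container's state and, finding it stopped, fall through to a restart instead of a re-create. A rough local sketch of that decision (container name taken from the log; error handling abbreviated):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        name := "no-preload-20220325020326-262786" // from the log above

	        // docker container inspect <name> --format={{.State.Status}}
	        out, err := exec.Command("docker", "container", "inspect", name,
	            "--format", "{{.State.Status}}").Output()
	        if err != nil {
	            fmt.Println("container missing; minikube would recreate it:", err)
	            return
	        }
	        if state := strings.TrimSpace(string(out)); state != "running" {
	            // fix.go's "unexpected machine state, will restart" branch.
	            if err := exec.Command("docker", "start", name).Run(); err != nil {
	                fmt.Println("docker start failed:", err)
	            }
	        }
	    }
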
	I0325 02:16:30.919166  516439 api_server.go:165] Checking apiserver status ...
	I0325 02:16:30.919257  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:30.927996  516439 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:30.928016  516439 api_server.go:165] Checking apiserver status ...
	I0325 02:16:30.928054  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:30.936308  516439 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:30.936337  516439 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0325 02:16:30.936344  516439 kubeadm.go:1067] stopping kube-system containers ...
	I0325 02:16:30.936355  516439 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0325 02:16:30.936402  516439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:16:30.961816  516439 cri.go:87] found id: "e3ee84b63ba33bcbfea4203eedc8a7d9573afda58890320b68f36d9cdf3bf2a8"
	I0325 02:16:30.961847  516439 cri.go:87] found id: "c16f6adb1790c3742b747bf61bfd1f357b72b0991ac3be7dbf874baa850fa2f5"
	I0325 02:16:30.961853  516439 cri.go:87] found id: "a9ae918cd79ec7953a8c2b8e19f9dd9716b4e319662e0b15cd6c7656e2668966"
	I0325 02:16:30.961869  516439 cri.go:87] found id: "576c531344a89713a22df123a23d95cf4df6514aa92aeadd890dd6891ea08031"
	I0325 02:16:30.961874  516439 cri.go:87] found id: "016ff43b53acf403c3cade0a6b87ed824539070c26fb0a1a43b665e04899b8ca"
	I0325 02:16:30.961880  516439 cri.go:87] found id: "74fb5be813cd2fffa2e56033edaaaac236ae7d6186cc67ee6afceba343a5edb8"
	I0325 02:16:30.961885  516439 cri.go:87] found id: ""
	I0325 02:16:30.961891  516439 cri.go:232] Stopping containers: [e3ee84b63ba33bcbfea4203eedc8a7d9573afda58890320b68f36d9cdf3bf2a8 c16f6adb1790c3742b747bf61bfd1f357b72b0991ac3be7dbf874baa850fa2f5 a9ae918cd79ec7953a8c2b8e19f9dd9716b4e319662e0b15cd6c7656e2668966 576c531344a89713a22df123a23d95cf4df6514aa92aeadd890dd6891ea08031 016ff43b53acf403c3cade0a6b87ed824539070c26fb0a1a43b665e04899b8ca 74fb5be813cd2fffa2e56033edaaaac236ae7d6186cc67ee6afceba343a5edb8]
	I0325 02:16:30.961942  516439 ssh_runner.go:195] Run: which crictl
	I0325 02:16:30.965080  516439 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop e3ee84b63ba33bcbfea4203eedc8a7d9573afda58890320b68f36d9cdf3bf2a8 c16f6adb1790c3742b747bf61bfd1f357b72b0991ac3be7dbf874baa850fa2f5 a9ae918cd79ec7953a8c2b8e19f9dd9716b4e319662e0b15cd6c7656e2668966 576c531344a89713a22df123a23d95cf4df6514aa92aeadd890dd6891ea08031 016ff43b53acf403c3cade0a6b87ed824539070c26fb0a1a43b665e04899b8ca 74fb5be813cd2fffa2e56033edaaaac236ae7d6186cc67ee6afceba343a5edb8
	I0325 02:16:30.990650  516439 ssh_runner.go:195] Run: sudo systemctl stop kubelet
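
	The reconfigure path above first drains the node: list all CRI containers labelled with the kube-system namespace, stop them with crictl, then stop the kubelet. A compact sketch of that sequence; in the log these commands run over SSH via ssh_runner, shown here as local exec for brevity:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func stopKubeSystem() error {
	        // crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	        out, err := exec.Command("sudo", "-s", "eval",
	            `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`).Output()
	        if err != nil {
	            return err
	        }
	        if ids := strings.Fields(string(out)); len(ids) > 0 {
	            args := append([]string{"/usr/bin/crictl", "stop"}, ids...)
	            if err := exec.Command("sudo", args...).Run(); err != nil {
	                return err
	            }
	        }
	        return exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
	    }

	    func main() {
	        if err := stopKubeSystem(); err != nil {
	            fmt.Println(err)
	        }
	    }
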
	I0325 02:16:31.001312  516439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:16:31.009030  516439 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Mar 25 02:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Mar 25 02:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Mar 25 02:15 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Mar 25 02:15 /etc/kubernetes/scheduler.conf
	
	I0325 02:16:31.009104  516439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0325 02:16:31.016238  516439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0325 02:16:31.022869  516439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0325 02:16:31.029565  516439 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:31.029621  516439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0325 02:16:31.036474  516439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0325 02:16:31.043067  516439 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:31.043125  516439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0325 02:16:31.049642  516439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:16:31.056883  516439 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0325 02:16:31.056914  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:31.101487  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:31.789161  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:31.922185  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:31.984722  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
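
	Rather than a full `kubeadm init`, the restart path above replays individual init phases against the saved kubeadm.yaml, in dependency order. A sketch of that loop, with paths copied from the log:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        binDir := "/var/lib/minikube/binaries/v1.23.4-rc.0"
	        cfg := "/var/tmp/minikube/kubeadm.yaml"

	        // Order matters: certs before kubeconfigs, kubelet before the
	        // static-pod control plane, local etcd last (as in the log above).
	        for _, phase := range []string{
	            "certs all", "kubeconfig all", "kubelet-start",
	            "control-plane all", "etcd local",
	        } {
	            cmd := fmt.Sprintf(
	                `sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`,
	                binDir, phase, cfg)
	            if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
	                fmt.Printf("phase %q failed: %v\n", phase, err)
	                return
	            }
	        }
	    }
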
	I0325 02:16:32.028325  516439 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:16:32.028393  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:32.537756  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:33.037616  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:33.537339  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:34.037634  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:34.537880  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:35.037295  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:35.538072  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:35.968327  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:37.968941  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:35.833187  519649 out.go:176] * Restarting existing docker container for "no-preload-20220325020326-262786" ...
	I0325 02:16:35.833270  519649 cli_runner.go:133] Run: docker start no-preload-20220325020326-262786
	I0325 02:16:36.223867  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:16:36.260748  519649 kic.go:420] container "no-preload-20220325020326-262786" state is running.
	I0325 02:16:36.261158  519649 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220325020326-262786
	I0325 02:16:36.295907  519649 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/config.json ...
	I0325 02:16:36.296110  519649 machine.go:88] provisioning docker machine ...
	I0325 02:16:36.296134  519649 ubuntu.go:169] provisioning hostname "no-preload-20220325020326-262786"
	I0325 02:16:36.296174  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:36.331323  519649 main.go:130] libmachine: Using SSH client type: native
	I0325 02:16:36.331546  519649 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49589 <nil> <nil>}
	I0325 02:16:36.331564  519649 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20220325020326-262786 && echo "no-preload-20220325020326-262786" | sudo tee /etc/hostname
	I0325 02:16:36.332175  519649 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50526->127.0.0.1:49589: read: connection reset by peer
	I0325 02:16:39.464533  519649 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20220325020326-262786
	
	I0325 02:16:39.464619  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:39.500131  519649 main.go:130] libmachine: Using SSH client type: native
	I0325 02:16:39.500311  519649 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49589 <nil> <nil>}
	I0325 02:16:39.500341  519649 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220325020326-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220325020326-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220325020326-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 02:16:39.619029  519649 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 02:16:39.619064  519649 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:16:39.619085  519649 ubuntu.go:177] setting up certificates
	I0325 02:16:39.619100  519649 provision.go:83] configureAuth start
	I0325 02:16:39.619161  519649 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220325020326-262786
	I0325 02:16:39.653347  519649 provision.go:138] copyHostCerts
	I0325 02:16:39.653407  519649 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:16:39.653421  519649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:16:39.653484  519649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:16:39.653581  519649 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:16:39.653592  519649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:16:39.653616  519649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:16:39.653673  519649 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:16:39.653687  519649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:16:39.653707  519649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:16:39.653765  519649 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220325020326-262786 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220325020326-262786]
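
	The provision.go line above generates a server certificate signed by the minikube CA, with the bracketed SANs baked in. A self-contained Go sketch of the same idea using crypto/x509 (here the CA is generated inline; minikube loads it from certs/ca.pem and ca-key.pem):

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "log"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        // CA key pair; in the log this comes from ca.pem / ca-key.pem.
	        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            log.Fatal(err)
	        }
	        ca := &x509.Certificate{
	            SerialNumber:          big.NewInt(1),
	            Subject:               pkix.Name{CommonName: "minikubeCA"},
	            NotBefore:             time.Now(),
	            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
	            IsCA:                  true,
	            KeyUsage:              x509.KeyUsageCertSign,
	            BasicConstraintsValid: true,
	        }
	        caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	        if err != nil {
	            log.Fatal(err)
	        }
	        caCert, err := x509.ParseCertificate(caDER)
	        if err != nil {
	            log.Fatal(err)
	        }

	        // Server certificate carrying the SANs from the provision.go line.
	        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            log.Fatal(err)
	        }
	        srv := &x509.Certificate{
	            SerialNumber: big.NewInt(2),
	            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-20220325020326-262786"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(26280 * time.Hour),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
	            DNSNames:     []string{"localhost", "minikube", "no-preload-20220325020326-262786"},
	        }
	        srvDER, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	        if err != nil {
	            log.Fatal(err)
	        }
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	    }
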
	I0325 02:16:39.955829  519649 provision.go:172] copyRemoteCerts
	I0325 02:16:39.955898  519649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:16:39.955933  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:39.989898  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.079856  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0325 02:16:40.099567  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:16:40.119824  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0325 02:16:40.140874  519649 provision.go:86] duration metric: configureAuth took 521.759605ms
	I0325 02:16:40.140906  519649 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:16:40.141163  519649 config.go:176] Loaded profile config "no-preload-20220325020326-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:16:40.141185  519649 machine.go:91] provisioned docker machine in 3.845060196s
	I0325 02:16:40.141193  519649 start.go:302] post-start starting for "no-preload-20220325020326-262786" (driver="docker")
	I0325 02:16:40.141201  519649 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:16:40.141260  519649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:16:40.141308  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:40.180699  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.271442  519649 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:16:40.274944  519649 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:16:40.275028  519649 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:16:40.275041  519649 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:16:40.275051  519649 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:16:40.275064  519649 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:16:40.275115  519649 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:16:40.275176  519649 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:16:40.275263  519649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:16:40.282729  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:16:40.301545  519649 start.go:305] post-start completed in 160.334219ms
	I0325 02:16:40.301629  519649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:16:40.301692  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:40.340243  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.427579  519649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:16:40.431311  519649 fix.go:57] fixHost completed within 4.636891748s
	I0325 02:16:40.431332  519649 start.go:81] releasing machines lock for "no-preload-20220325020326-262786", held for 4.636932836s
	I0325 02:16:40.431419  519649 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220325020326-262786
	I0325 02:16:40.471929  519649 ssh_runner.go:195] Run: systemctl --version
	I0325 02:16:40.471972  519649 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:16:40.471994  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:40.472031  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:36.038098  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:36.537401  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:37.037404  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:37.537180  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:38.037556  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:38.099215  516439 api_server.go:71] duration metric: took 6.070889838s to wait for apiserver process to appear ...
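
	The burst of pgrep runs above is a simple poll: retry `pgrep -xnf` every ~500ms until the apiserver process exists or a deadline passes. A minimal sketch (local exec stands in for the SSH runner):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    func waitForProcess(pattern string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            // pgrep exits 0 as soon as a matching process exists.
	            if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
	                return nil
	            }
	            time.Sleep(500 * time.Millisecond) // the ~500ms cadence visible above
	        }
	        return fmt.Errorf("no process matching %q within %s", pattern, timeout)
	    }

	    func main() {
	        fmt.Println(waitForProcess(`kube-apiserver.*minikube.*`, 6*time.Minute))
	    }
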
	I0325 02:16:38.099286  516439 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:16:38.099301  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:38.099706  516439 api_server.go:256] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": dial tcp 192.168.58.2:8443: connect: connection refused
	I0325 02:16:38.600314  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:41.706206  516439 api_server.go:266] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0325 02:16:41.706241  516439 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0325 02:16:42.100667  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:42.105436  516439 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:16:42.105478  516439 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:16:42.599961  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:42.605081  516439 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:16:42.605109  516439 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:16:43.100711  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:43.105895  516439 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0325 02:16:43.112809  516439 api_server.go:140] control plane version: v1.23.4-rc.0
	I0325 02:16:43.112833  516439 api_server.go:130] duration metric: took 5.013539931s to wait for apiserver health ...
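
	The healthz wait above tolerates the intermediate 403 (anonymous user) and 500 (post-start hooks still failing) responses and only stops on a 200 "ok". A sketch of such a probe loop; the insecure TLS config reflects that this is an unauthenticated liveness poll, not minikube's exact client:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func waitHealthy(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout: 2 * time.Second,
	            Transport: &http.Transport{
	                // Skip cert verification for a quick anonymous probe.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // the "healthz returned 200: ok" line above
	                }
	                // The 403/500 bodies in the log are printed here and retried.
	                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("apiserver never became healthy within %s", timeout)
	    }

	    func main() {
	        if err := waitHealthy("https://192.168.58.2:8443/healthz", time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }
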
	I0325 02:16:43.112846  516439 cni.go:93] Creating CNI manager for ""
	I0325 02:16:43.112855  516439 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:16:43.115000  516439 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:16:43.115081  516439 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:16:43.119112  516439 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl ...
	I0325 02:16:43.119136  516439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:16:43.132304  516439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:16:43.929421  516439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:16:43.937528  516439 system_pods.go:59] 9 kube-system pods found
	I0325 02:16:43.937572  516439 system_pods.go:61] "coredns-64897985d-p65tg" [e65563a2-916d-4e4f-9899-45abcf6e43e6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:43.937583  516439 system_pods.go:61] "etcd-newest-cni-20220325021454-262786" [301b74c1-25bb-412c-8781-5b02da9c4093] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0325 02:16:43.937590  516439 system_pods.go:61] "kindnet-td766" [40872158-4184-4df2-ae83-e42d228b4223] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:16:43.937600  516439 system_pods.go:61] "kube-apiserver-newest-cni-20220325021454-262786" [d2e43879-332a-448a-97c5-1a2bea717597] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0325 02:16:43.937605  516439 system_pods.go:61] "kube-controller-manager-newest-cni-20220325021454-262786" [8af92ea2-d71d-4620-ac1c-594d1cf3cd2b] Running
	I0325 02:16:43.937612  516439 system_pods.go:61] "kube-proxy-fj7dd" [1af095d5-b04f-4be9-bd3b-e2c7a2b373b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0325 02:16:43.937621  516439 system_pods.go:61] "kube-scheduler-newest-cni-20220325021454-262786" [33a2b8ac-d72f-4399-971a-38f587c9994c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0325 02:16:43.937627  516439 system_pods.go:61] "metrics-server-b955d9d8-sbk6n" [80ba7292-f3cd-4e79-88b4-6e9f5d1e738e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:43.937636  516439 system_pods.go:61] "storage-provisioner" [28ecf9b3-cf1c-495e-a39e-8fe37150d662] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:43.937642  516439 system_pods.go:74] duration metric: took 8.196027ms to wait for pod list to return data ...
	I0325 02:16:43.937652  516439 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:16:43.940863  516439 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:16:43.940894  516439 node_conditions.go:123] node cpu capacity is 8
	I0325 02:16:43.940904  516439 node_conditions.go:105] duration metric: took 3.247281ms to run NodePressure ...
	I0325 02:16:43.940927  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:44.087258  516439 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 02:16:44.094685  516439 ops.go:34] apiserver oom_adj: -16
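
	Reading /proc/<pid>/oom_adj confirms the apiserver is shielded from the kernel's OOM killer: the legacy scale runs -17..15, and strongly negative values such as the -16 logged above are among the last the kernel will kill. A small sketch of the same probe:

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // Find a kube-apiserver pid (pgrep prints one per line).
	        out, err := exec.Command("pgrep", "kube-apiserver").Output()
	        if err != nil {
	            fmt.Println("apiserver not running:", err)
	            return
	        }
	        pid := strings.Fields(string(out))[0]

	        data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	        if err != nil {
	            fmt.Println(err)
	            return
	        }
	        fmt.Printf("apiserver oom_adj: %s", data)
	    }
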
	I0325 02:16:44.094721  516439 kubeadm.go:605] restartCluster took 16.202985802s
	I0325 02:16:44.094732  516439 kubeadm.go:393] StartCluster complete in 16.248550193s
	I0325 02:16:44.094758  516439 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:16:44.094885  516439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:16:44.096265  516439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:16:44.101456  516439 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220325021454-262786" rescaled to 1
	I0325 02:16:44.101529  516439 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:16:44.103443  516439 out.go:176] * Verifying Kubernetes components...
	I0325 02:16:44.103511  516439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:16:44.101558  516439 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0325 02:16:44.103612  516439 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220325021454-262786"
	I0325 02:16:44.103628  516439 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220325021454-262786"
	I0325 02:16:44.103636  516439 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220325021454-262786"
	W0325 02:16:44.103642  516439 addons.go:165] addon storage-provisioner should already be in state true
	I0325 02:16:44.103644  516439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220325021454-262786"
	I0325 02:16:44.103659  516439 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220325021454-262786"
	I0325 02:16:44.103685  516439 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220325021454-262786"
	I0325 02:16:44.103693  516439 host.go:66] Checking if "newest-cni-20220325021454-262786" exists ...
	W0325 02:16:44.103700  516439 addons.go:165] addon metrics-server should already be in state true
	I0325 02:16:44.103616  516439 addons.go:65] Setting dashboard=true in profile "newest-cni-20220325021454-262786"
	I0325 02:16:44.103733  516439 host.go:66] Checking if "newest-cni-20220325021454-262786" exists ...
	I0325 02:16:44.103732  516439 addons.go:153] Setting addon dashboard=true in "newest-cni-20220325021454-262786"
	W0325 02:16:44.103905  516439 addons.go:165] addon dashboard should already be in state true
	I0325 02:16:44.103988  516439 host.go:66] Checking if "newest-cni-20220325021454-262786" exists ...
	I0325 02:16:44.104010  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:44.101542  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 02:16:44.104212  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:44.101745  516439 config.go:176] Loaded profile config "newest-cni-20220325021454-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:16:44.104241  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:44.104495  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:44.121208  516439 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:16:44.121280  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:44.155459  516439 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:16:44.155647  516439 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:16:44.155665  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 02:16:44.155751  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:44.162366  516439 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I0325 02:16:44.163990  516439 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0325 02:16:44.161446  516439 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220325021454-262786"
	W0325 02:16:44.164031  516439 addons.go:165] addon default-storageclass should already be in state true
	I0325 02:16:44.164070  516439 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0325 02:16:44.164081  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0325 02:16:44.164083  516439 host.go:66] Checking if "newest-cni-20220325021454-262786" exists ...
	I0325 02:16:40.468819  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:42.968016  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:44.165737  516439 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0325 02:16:44.164138  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:44.165834  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0325 02:16:44.165852  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0325 02:16:44.164608  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:44.165907  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:44.192351  516439 api_server.go:71] duration metric: took 90.77915ms to wait for apiserver process to appear ...
	I0325 02:16:44.192383  516439 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:16:44.192398  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:44.192396  516439 start.go:757] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0325 02:16:44.198241  516439 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0325 02:16:44.199343  516439 api_server.go:140] control plane version: v1.23.4-rc.0
	I0325 02:16:44.199364  516439 api_server.go:130] duration metric: took 6.9739ms to wait for apiserver health ...
	I0325 02:16:44.199376  516439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:16:44.203708  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:44.209623  516439 system_pods.go:59] 9 kube-system pods found
	I0325 02:16:44.209665  516439 system_pods.go:61] "coredns-64897985d-p65tg" [e65563a2-916d-4e4f-9899-45abcf6e43e6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:44.209676  516439 system_pods.go:61] "etcd-newest-cni-20220325021454-262786" [301b74c1-25bb-412c-8781-5b02da9c4093] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0325 02:16:44.209686  516439 system_pods.go:61] "kindnet-td766" [40872158-4184-4df2-ae83-e42d228b4223] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:16:44.209706  516439 system_pods.go:61] "kube-apiserver-newest-cni-20220325021454-262786" [d2e43879-332a-448a-97c5-1a2bea717597] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0325 02:16:44.209719  516439 system_pods.go:61] "kube-controller-manager-newest-cni-20220325021454-262786" [8af92ea2-d71d-4620-ac1c-594d1cf3cd2b] Running
	I0325 02:16:44.209734  516439 system_pods.go:61] "kube-proxy-fj7dd" [1af095d5-b04f-4be9-bd3b-e2c7a2b373b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0325 02:16:44.209784  516439 system_pods.go:61] "kube-scheduler-newest-cni-20220325021454-262786" [33a2b8ac-d72f-4399-971a-38f587c9994c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0325 02:16:44.209802  516439 system_pods.go:61] "metrics-server-b955d9d8-sbk6n" [80ba7292-f3cd-4e79-88b4-6e9f5d1e738e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:44.209812  516439 system_pods.go:61] "storage-provisioner" [28ecf9b3-cf1c-495e-a39e-8fe37150d662] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:44.209818  516439 system_pods.go:74] duration metric: took 10.436764ms to wait for pod list to return data ...
	I0325 02:16:44.209858  516439 default_sa.go:34] waiting for default service account to be created ...
	I0325 02:16:44.215792  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:44.216231  516439 default_sa.go:45] found service account: "default"
	I0325 02:16:44.216319  516439 default_sa.go:55] duration metric: took 6.410246ms for default service account to be created ...
	I0325 02:16:44.216344  516439 kubeadm.go:548] duration metric: took 114.781757ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0325 02:16:44.216396  516439 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:16:44.219134  516439 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:16:44.219161  516439 node_conditions.go:123] node cpu capacity is 8
	I0325 02:16:44.219175  516439 node_conditions.go:105] duration metric: took 2.773273ms to run NodePressure ...
	I0325 02:16:44.219210  516439 start.go:213] waiting for startup goroutines ...
	I0325 02:16:44.221833  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:44.222359  516439 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 02:16:44.222381  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 02:16:44.222432  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:44.261798  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:44.319771  516439 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0325 02:16:44.319803  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0325 02:16:44.319846  516439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:16:44.321101  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0325 02:16:44.321125  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0325 02:16:44.334351  516439 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0325 02:16:44.334375  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0325 02:16:44.334647  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0325 02:16:44.334666  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0325 02:16:44.349057  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0325 02:16:44.349094  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0325 02:16:44.349070  516439 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:16:44.349161  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0325 02:16:44.389276  516439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:16:44.392743  516439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 02:16:44.393530  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0325 02:16:44.393550  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0325 02:16:44.410521  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0325 02:16:44.410552  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0325 02:16:44.496572  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0325 02:16:44.496606  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0325 02:16:44.515360  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0325 02:16:44.515405  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0325 02:16:44.600692  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0325 02:16:44.600722  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0325 02:16:44.688604  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:16:44.688635  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0325 02:16:44.707599  516439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:16:44.889028  516439 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220325021454-262786"
	I0325 02:16:45.068498  516439 out.go:176] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0325 02:16:45.068530  516439 addons.go:417] enableAddons completed in 966.974309ms
	I0325 02:16:45.105519  516439 start.go:499] kubectl: 1.23.5, cluster: 1.23.4-rc.0 (minor skew: 0)
	I0325 02:16:45.107876  516439 out.go:176] * Done! kubectl is now configured to use "newest-cni-20220325021454-262786" cluster and "default" namespace by default
	I0325 02:16:40.514344  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.516013  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.624849  519649 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:16:40.637160  519649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:16:40.647198  519649 docker.go:183] disabling docker service ...
	I0325 02:16:40.647293  519649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:16:40.657506  519649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:16:40.667205  519649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:16:40.756526  519649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:16:40.838425  519649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 02:16:40.849201  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:16:40.862764  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
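For reference, the base64 payload in the command above decodes to the containerd configuration minikube installs at /etc/containerd/config.toml. A decoded excerpt of the fields that matter for this run (note that conf_dir matches the kubelet's --cni-conf-dir=/etc/cni/net.mk set further below):

	version = 2
	root = "/var/lib/containerd"
	state = "/run/containerd"
	[grpc]
	  address = "/run/containerd/containerd.sock"
	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "k8s.gcr.io/pause:3.6"
	  [plugins."io.containerd.grpc.v1.cri".containerd]
	    snapshotter = "overlayfs"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    bin_dir = "/opt/cni/bin"
	    conf_dir = "/etc/cni/net.mk"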
	I0325 02:16:40.877296  519649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:16:40.884604  519649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:16:40.891942  519649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:16:40.968097  519649 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:16:41.042195  519649 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:16:41.042340  519649 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:16:41.046206  519649 start.go:462] Will wait 60s for crictl version
	I0325 02:16:41.046277  519649 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:16:41.069914  519649 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:16:41Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0325 02:16:44.968453  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:47.468552  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:52.117787  519649 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:16:52.144102  519649 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 02:16:52.144170  519649 ssh_runner.go:195] Run: containerd --version
	I0325 02:16:52.168021  519649 ssh_runner.go:195] Run: containerd --version
	I0325 02:16:52.192255  519649 out.go:176] * Preparing Kubernetes v1.23.4-rc.0 on containerd 1.5.10 ...
	I0325 02:16:52.192348  519649 cli_runner.go:133] Run: docker network inspect no-preload-20220325020326-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:16:52.228171  519649 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0325 02:16:52.231817  519649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
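The /etc/hosts rewrite above (repeated later for control-plane.minikube.internal) is an idempotent append: drop any stale line for the name, re-add the fresh IP-to-name mapping, and copy the rebuilt file over /etc/hosts in one step. Unrolled with comments, with NAME/IP as illustrative placeholders taken from the log rather than from minikube source:

	NAME=host.minikube.internal; IP=192.168.67.1   # values seen in the command above
	{ grep -v $'\t'"$NAME"'$' /etc/hosts           # keep every line except an old entry for NAME
	  echo "$IP"$'\t'"$NAME"; } > /tmp/h.$$        # append the fresh tab-separated mapping
	sudo cp /tmp/h.$$ /etc/hosts                   # install the rebuilt file in one copy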
	I0325 02:16:49.968284  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:52.467868  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:54.468272  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:52.244329  519649 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 02:16:52.244416  519649 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime containerd
	I0325 02:16:52.244468  519649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:16:52.271321  519649 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:16:52.271344  519649 cache_images.go:84] Images are preloaded, skipping loading
	I0325 02:16:52.271385  519649 ssh_runner.go:195] Run: sudo crictl info
	I0325 02:16:52.298329  519649 cni.go:93] Creating CNI manager for ""
	I0325 02:16:52.298360  519649 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:16:52.298373  519649 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 02:16:52.298389  519649 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.4-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220325020326-262786 NodeName:no-preload-20220325020326-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 02:16:52.298577  519649 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220325020326-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.4-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
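The generated kubeadm.yaml above bundles four documents: an InitConfiguration (node registration), a ClusterConfiguration (apiserver/controller-manager/scheduler/etcd), a KubeletConfiguration, and a KubeProxyConfiguration. On the restart path further below, minikube does not run a full `kubeadm init` against it; it replays individual phases. Condensed from the commands logged at 02:16:55-56 into a sketch:

	KUBEADM=/var/lib/minikube/binaries/v1.23.4-rc.0/kubeadm   # version-pinned binary dir, as in the log
	CFG=/var/tmp/minikube/kubeadm.yaml
	sudo "$KUBEADM" init phase certs all --config "$CFG"          # recreate any missing certificates
	sudo "$KUBEADM" init phase kubeconfig all --config "$CFG"     # rewrite the /etc/kubernetes/*.conf files
	sudo "$KUBEADM" init phase kubelet-start --config "$CFG"      # write kubelet config and (re)start it
	sudo "$KUBEADM" init phase control-plane all --config "$CFG"  # regenerate static-pod manifests
	sudo "$KUBEADM" init phase etcd local --config "$CFG"         # regenerate the local etcd manifest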
	
	I0325 02:16:52.298682  519649 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.4-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220325020326-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
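The empty `ExecStart=` followed by a populated one in the unit text above is the standard systemd drop-in override: a non-oneshot service allows only a single ExecStart, so a drop-in must first clear the value inherited from the base unit before setting its own. The text is installed as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf just below (582 bytes), and the essential shape is:

	[Service]
	ExecStart=
	# ^ clears the ExecStart inherited from kubelet.service; without this line,
	#   systemd rejects a second ExecStart on a simple service
	ExecStart=/var/lib/minikube/binaries/v1.23.4-rc.0/kubelet --config=/var/lib/kubelet/config.yaml
	# (remaining flags as in the full ExecStart line above)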
	I0325 02:16:52.298747  519649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4-rc.0
	I0325 02:16:52.306846  519649 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 02:16:52.306918  519649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 02:16:52.315084  519649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (582 bytes)
	I0325 02:16:52.328704  519649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0325 02:16:52.342299  519649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2066 bytes)
	I0325 02:16:52.355577  519649 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0325 02:16:52.358463  519649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:16:52.367826  519649 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786 for IP: 192.168.67.2
	I0325 02:16:52.367934  519649 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 02:16:52.367989  519649 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 02:16:52.368051  519649 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/client.key
	I0325 02:16:52.368101  519649 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.key.c7fa3a9e
	I0325 02:16:52.368132  519649 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.key
	I0325 02:16:52.368232  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 02:16:52.368263  519649 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 02:16:52.368275  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 02:16:52.368299  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 02:16:52.368335  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 02:16:52.368357  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 02:16:52.368397  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:16:52.368977  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 02:16:52.386350  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0325 02:16:52.404078  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 02:16:52.422535  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0325 02:16:52.441293  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 02:16:52.458689  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 02:16:52.476708  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 02:16:52.494410  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 02:16:52.511769  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 02:16:52.529287  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 02:16:52.546092  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 02:16:52.562842  519649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 02:16:52.574641  519649 ssh_runner.go:195] Run: openssl version
	I0325 02:16:52.579369  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 02:16:52.586915  519649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 02:16:52.590088  519649 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 02:16:52.590144  519649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 02:16:52.595082  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
	I0325 02:16:52.601804  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 02:16:52.608863  519649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 02:16:52.611860  519649 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 02:16:52.611906  519649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 02:16:52.616573  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
	I0325 02:16:52.622899  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 02:16:52.629919  519649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:16:52.632815  519649 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:16:52.632859  519649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:16:52.637417  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
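The openssl/ln pairs above reproduce the OpenSSL CA-directory convention: TLS libraries resolve a CA certificate by the hash of its subject name, so every certificate installed under /etc/ssl/certs needs a `<subject-hash>.0` symlink (the `.0` suffix distinguishes hash collisions). For the cluster CA, the equivalent manual steps would be:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints b5213941 for this cert, per the log
	sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"    # what the `test -L || ln -fs` command above does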
	I0325 02:16:52.644239  519649 kubeadm.go:391] StartCluster: {Name:no-preload-20220325020326-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:16:52.644354  519649 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 02:16:52.644394  519649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:16:52.669210  519649 cri.go:87] found id: "e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741"
	I0325 02:16:52.669242  519649 cri.go:87] found id: "0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc"
	I0325 02:16:52.669249  519649 cri.go:87] found id: "ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252"
	I0325 02:16:52.669254  519649 cri.go:87] found id: "fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b"
	I0325 02:16:52.669270  519649 cri.go:87] found id: "e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710"
	I0325 02:16:52.669279  519649 cri.go:87] found id: "b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244"
	I0325 02:16:52.669283  519649 cri.go:87] found id: ""
	I0325 02:16:52.669324  519649 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0325 02:16:52.683722  519649 cri.go:114] JSON = null
	W0325 02:16:52.683785  519649 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0325 02:16:52.683838  519649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 02:16:52.690850  519649 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0325 02:16:52.690872  519649 kubeadm.go:601] restartCluster start
	I0325 02:16:52.690912  519649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0325 02:16:52.697516  519649 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:52.698228  519649 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220325020326-262786" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:16:52.698600  519649 kubeconfig.go:127] "no-preload-20220325020326-262786" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0325 02:16:52.699273  519649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:16:52.700696  519649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0325 02:16:52.707667  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:52.707717  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:52.715666  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:52.916102  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:52.916184  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:52.925481  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.116769  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.116855  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.125381  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.316671  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.316772  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.325189  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.516483  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.516581  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.525793  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.716104  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.716183  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.724648  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.915849  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.915940  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.924616  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.115776  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.115861  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.124538  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.316714  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.316801  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.325601  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.515836  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.515913  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.524158  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.716463  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.716549  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.725607  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.915823  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.915903  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.924487  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.116802  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.116901  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.126160  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.316446  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.316526  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.324891  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:56.468419  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:58.968213  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:55.516554  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.516656  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.525265  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.716429  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.716509  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.725617  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.725645  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.725683  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.733139  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.733164  519649 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
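Each "Checking apiserver status" round above is the same probe, repeated roughly every 200ms until the window expires: `pgrep -x` demands an exact pattern match, `-n` selects the newest matching process, and `-f` matches against the full command line, so an apiserver belonging to anything other than this minikube profile is ignored. Reduced to a sketch (the ~3s deadline is inferred from the timestamps above, not taken from minikube source):

	deadline=$((SECONDS + 3))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  (( SECONDS >= deadline )) && { echo "apiserver error: timed out waiting for the condition" >&2; break; }
	  sleep 0.2
	done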
	I0325 02:16:55.733174  519649 kubeadm.go:1067] stopping kube-system containers ...
	I0325 02:16:55.733193  519649 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0325 02:16:55.733247  519649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:16:55.758794  519649 cri.go:87] found id: "e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741"
	I0325 02:16:55.758826  519649 cri.go:87] found id: "0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc"
	I0325 02:16:55.758835  519649 cri.go:87] found id: "ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252"
	I0325 02:16:55.758843  519649 cri.go:87] found id: "fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b"
	I0325 02:16:55.758852  519649 cri.go:87] found id: "e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710"
	I0325 02:16:55.758860  519649 cri.go:87] found id: "b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244"
	I0325 02:16:55.758867  519649 cri.go:87] found id: ""
	I0325 02:16:55.758874  519649 cri.go:232] Stopping containers: [e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741 0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252 fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710 b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244]
	I0325 02:16:55.758928  519649 ssh_runner.go:195] Run: which crictl
	I0325 02:16:55.762024  519649 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741 0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252 fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710 b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244
	I0325 02:16:55.786603  519649 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0325 02:16:55.796385  519649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:16:55.803085  519649 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar 25 02:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Mar 25 02:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Mar 25 02:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Mar 25 02:03 /etc/kubernetes/scheduler.conf
	
	I0325 02:16:55.803151  519649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0325 02:16:55.809939  519649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0325 02:16:55.816507  519649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0325 02:16:55.822744  519649 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.822807  519649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0325 02:16:55.828985  519649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0325 02:16:55.835918  519649 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.835967  519649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0325 02:16:55.843105  519649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:16:55.850384  519649 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
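The grep probes above are the reconfigure heuristic: a kubeconfig under /etc/kubernetes that does not mention https://control-plane.minikube.internal:8443 was generated against a different endpoint, so it is deleted and left for the `kubeadm init phase kubeconfig` step below to regenerate (here grep exited 1 for controller-manager.conf and scheduler.conf, hence the two `rm -f` calls). As a loop, the same check would look like:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" ||
	    sudo rm -f "/etc/kubernetes/$f.conf"   # stale server endpoint: drop it and let kubeadm rewrite it
	done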
	I0325 02:16:55.850419  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:55.893825  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:56.667540  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:56.802771  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:56.854899  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:56.922247  519649 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:16:56.922327  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:57.431777  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:57.932218  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:58.431927  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:58.931629  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:59.432174  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:59.932237  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:00.431697  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:00.968915  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:03.468075  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:00.932213  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:01.431617  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:01.931744  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:02.431861  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:02.931562  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:02.996665  519649 api_server.go:71] duration metric: took 6.074430006s to wait for apiserver process to appear ...
	I0325 02:17:02.996706  519649 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:17:02.996721  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:02.997178  519649 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0325 02:17:03.497954  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:06.096426  519649 api_server.go:266] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0325 02:17:06.096466  519649 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0325 02:17:06.497872  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:06.502718  519649 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:17:06.502746  519649 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:17:06.998348  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:07.002908  519649 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:17:07.002934  519649 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
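The 500s above are /healthz in verbose mode: every `[+]`/`[-]` line is an individual check, and only the two `[-]` post-start hooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are still pending after the restart, which fails the aggregate until they complete at 02:17:07. Individual checks are also addressable directly, which helps when bisecting a failure; for example (endpoint from the log; `-k` because the minikube CA is not in the host trust store, and anonymous requests may still get 403 until the RBAC bootstrap roles exist, exactly as seen at 02:17:06 above):

	curl -k "https://192.168.67.2:8443/healthz?verbose"                              # the per-check breakdown shown above
	curl -k "https://192.168.67.2:8443/healthz/poststarthook/rbac/bootstrap-roles"   # one check on its own: "ok" or an error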
	I0325 02:17:07.497481  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:07.502551  519649 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0325 02:17:07.508747  519649 api_server.go:140] control plane version: v1.23.4-rc.0
	I0325 02:17:07.508776  519649 api_server.go:130] duration metric: took 4.512062997s to wait for apiserver health ...
	I0325 02:17:07.508793  519649 cni.go:93] Creating CNI manager for ""
	I0325 02:17:07.508800  519649 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:17:05.468506  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:07.968498  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:07.511699  519649 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:17:07.511795  519649 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:17:07.515865  519649 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl ...
	I0325 02:17:07.515896  519649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:17:07.530511  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:17:08.432775  519649 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:17:08.439909  519649 system_pods.go:59] 9 kube-system pods found
	I0325 02:17:08.439946  519649 system_pods.go:61] "coredns-64897985d-b9827" [29b80e2f-89fe-4b4a-a931-333a59535d4c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:17:08.439962  519649 system_pods.go:61] "etcd-no-preload-20220325020326-262786" [add71311-f324-4612-b981-ca42b0ef813c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0325 02:17:08.439971  519649 system_pods.go:61] "kindnet-nhlsm" [57939cf7-016c-486a-8a08-466ff1515c1f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:17:08.439977  519649 system_pods.go:61] "kube-apiserver-no-preload-20220325020326-262786" [f9b1f749-8d63-446e-bd36-152e849a5bf6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0325 02:17:08.439990  519649 system_pods.go:61] "kube-controller-manager-no-preload-20220325020326-262786" [a229a2c1-6ed0-434a-8b3c-7951beee3fe0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0325 02:17:08.439994  519649 system_pods.go:61] "kube-proxy-l6tg2" [f41c6b8d-0d57-4096-af80-8e9a7da29b60] Running
	I0325 02:17:08.440003  519649 system_pods.go:61] "kube-scheduler-no-preload-20220325020326-262786" [a41de5aa-8f3c-46cd-bc8e-85c035c31512] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0325 02:17:08.440012  519649 system_pods.go:61] "metrics-server-b955d9d8-dzczk" [5c06ad70-f575-44ee-8a14-d4d2b172ccf2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:17:08.440019  519649 system_pods.go:61] "storage-provisioner" [d778a38b-7ebf-4a50-956a-6628a9055852] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:17:08.440027  519649 system_pods.go:74] duration metric: took 7.223437ms to wait for pod list to return data ...
	I0325 02:17:08.440037  519649 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:17:08.443080  519649 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:17:08.443104  519649 node_conditions.go:123] node cpu capacity is 8
	I0325 02:17:08.443116  519649 node_conditions.go:105] duration metric: took 3.071905ms to run NodePressure ...
	I0325 02:17:08.443134  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:17:08.590505  519649 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0325 02:17:08.611319  519649 kubeadm.go:752] kubelet initialised
	I0325 02:17:08.611346  519649 kubeadm.go:753] duration metric: took 20.794737ms waiting for restarted kubelet to initialise ...
	I0325 02:17:08.611354  519649 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:17:08.617229  519649 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-b9827" in "kube-system" namespace to be "Ready" ...
	I0325 02:17:09.968693  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:12.468173  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:10.623188  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:13.123899  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:14.968191  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:17.468172  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:15.623504  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:18.123637  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:19.968292  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:22.468166  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:20.623486  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:22.624740  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:24.625363  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:24.968021  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:27.468041  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:29.468565  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:27.123366  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:29.123949  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:31.968178  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:33.968823  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:31.623836  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:34.123164  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:36.468695  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:38.967993  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:36.123971  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:38.623418  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:41.468821  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:43.968154  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:40.623650  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:43.124505  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:45.968404  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:47.968532  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:45.624087  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:48.123363  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:50.468244  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:52.468797  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:54.468960  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:50.623592  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:52.624829  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:55.124055  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:56.968701  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:59.467918  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:57.623248  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:59.623684  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:01.468256  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:03.967939  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:01.623899  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:04.123560  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:05.968665  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:08.467884  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:06.124019  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:08.623070  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:10.468279  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:12.468416  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:11.123374  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:13.623289  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:14.967919  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:16.968150  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:19.468065  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:15.623672  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:18.124412  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:21.468475  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:23.968850  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:20.624197  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:23.123807  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:25.124272  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:26.468766  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:28.968612  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:27.624274  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:30.123559  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:30.968779  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:33.468295  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:32.623099  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:34.623275  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:35.468741  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:37.968661  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:36.623368  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:38.623990  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:40.468313  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:42.468818  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:40.624162  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:43.123758  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:44.968325  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:47.468369  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:45.623667  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:47.623731  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:50.123654  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:49.968304  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:51.968856  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:54.468654  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:52.623485  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:54.623818  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:56.968573  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:58.968977  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:57.123496  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:59.124157  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:01.470174  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:03.968282  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:01.623917  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:04.123410  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:05.968412  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:07.968843  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:06.124235  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:08.124325  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:10.467818  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:12.468731  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:10.623795  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:13.123199  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:15.124279  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:14.967929  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:16.968185  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:18.968894  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:17.623867  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:20.124329  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:21.468097  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:23.468504  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:22.622920  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:24.623325  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:25.968086  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:28.467817  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:26.623622  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:29.123797  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:30.467966  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:32.967797  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:34.470135  496534 node_ready.go:38] duration metric: took 4m0.008592307s waiting for node "old-k8s-version-20220325015306-262786" to be "Ready" ...
	I0325 02:19:34.472535  496534 out.go:176] 
	W0325 02:19:34.472693  496534 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:19:34.472714  496534 out.go:241] * 
	W0325 02:19:34.473654  496534 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	f6cf87321d1b5       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   31b95e88dc2af
	0f0c7b7b9b87a       c21b0c7400f98       4 minutes ago        Running             kube-proxy                0                   4dfb05edc119d
	a0112f59e2b6f       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   31b95e88dc2af
	85b04b6171d3e       06a629a7e51cd       4 minutes ago        Running             kube-controller-manager   0                   711c9a59158f6
	b5876f14d59d1       b2756210eeabf       4 minutes ago        Running             etcd                      0                   5baefcef4d5b1
	3e153bc9be8e3       b305571ca60a5       4 minutes ago        Running             kube-apiserver            0                   dd77722cd3ddb
	df11628d76654       301ddc62b80b1       4 minutes ago        Running             kube-scheduler            0                   0881b8d78c2d7
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-03-25 02:09:40 UTC, end at Fri 2022-03-25 02:19:35 UTC. --
	Mar 25 02:15:10 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:15:10.592310537Z" level=info msg="StartContainer for \"b5876f14d59d16a3864556cd668573e5ab98ce4ada95bce268d655a2dedf6463\" returns successfully"
	Mar 25 02:15:10 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:15:10.596923952Z" level=info msg="StartContainer for \"85b04b6171d3e514ba8fe84e6ca60fc75eb1ff0ea1fc607a2b792d171f02a5a0\" returns successfully"
	Mar 25 02:15:33 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:15:33.822868674Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Mar 25 02:15:34 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:15:34.024500178Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-vb8zw,Uid:444e4f5d-2509-464b-ad2a-252f5a8b7ff2,Namespace:kube-system,Attempt:0,}"
	Mar 25 02:15:34 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:15:34.041328690Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/31b95e88dc2af7fec2970270795b024a3276e9f48c04816cae5385c2cf69b2c4 pid=3975
	Mar 25 02:15:34 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:15:34.085411584Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-w2fhc,Uid:efbca989-bc77-4b08-8674-ba173887b1c3,Namespace:kube-system,Attempt:0,}"
	Mar 25 02:15:34 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:15:34.102544415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-vb8zw,Uid:444e4f5d-2509-464b-ad2a-252f5a8b7ff2,Namespace:kube-system,Attempt:0,} returns sandbox id \"31b95e88dc2af7fec2970270795b024a3276e9f48c04816cae5385c2cf69b2c4\""
	Mar 25 02:15:34 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:15:34.102854749Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4dfb05edc119d7251345db4d255f65215e34f24f95d913b410dfa4eda3bda3ed pid=4016
	Mar 25 02:15:34 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:15:34.108096640Z" level=info msg="CreateContainer within sandbox \"31b95e88dc2af7fec2970270795b024a3276e9f48c04816cae5385c2cf69b2c4\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Mar 25 02:15:34 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:15:34.125690582Z" level=info msg="CreateContainer within sandbox \"31b95e88dc2af7fec2970270795b024a3276e9f48c04816cae5385c2cf69b2c4\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"a0112f59e2b6fd103d27456f7718a8ab098cd926beeedc18ef55292467ac828d\""
	Mar 25 02:15:34 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:15:34.126258307Z" level=info msg="StartContainer for \"a0112f59e2b6fd103d27456f7718a8ab098cd926beeedc18ef55292467ac828d\""
	Mar 25 02:15:34 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:15:34.175896583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w2fhc,Uid:efbca989-bc77-4b08-8674-ba173887b1c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dfb05edc119d7251345db4d255f65215e34f24f95d913b410dfa4eda3bda3ed\""
	Mar 25 02:15:34 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:15:34.178746341Z" level=info msg="CreateContainer within sandbox \"4dfb05edc119d7251345db4d255f65215e34f24f95d913b410dfa4eda3bda3ed\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Mar 25 02:15:34 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:15:34.199091149Z" level=info msg="CreateContainer within sandbox \"4dfb05edc119d7251345db4d255f65215e34f24f95d913b410dfa4eda3bda3ed\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0f0c7b7b9b87a3fb5fb66ad36820e076e8a71d97b4e72f8850b0f7664c56e904\""
	Mar 25 02:15:34 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:15:34.199662951Z" level=info msg="StartContainer for \"0f0c7b7b9b87a3fb5fb66ad36820e076e8a71d97b4e72f8850b0f7664c56e904\""
	Mar 25 02:15:34 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:15:34.269436627Z" level=info msg="StartContainer for \"0f0c7b7b9b87a3fb5fb66ad36820e076e8a71d97b4e72f8850b0f7664c56e904\" returns successfully"
	Mar 25 02:15:34 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:15:34.388779843Z" level=info msg="StartContainer for \"a0112f59e2b6fd103d27456f7718a8ab098cd926beeedc18ef55292467ac828d\" returns successfully"
	Mar 25 02:18:14 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:18:14.717916345Z" level=info msg="shim disconnected" id=a0112f59e2b6fd103d27456f7718a8ab098cd926beeedc18ef55292467ac828d
	Mar 25 02:18:14 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:18:14.717972829Z" level=warning msg="cleaning up after shim disconnected" id=a0112f59e2b6fd103d27456f7718a8ab098cd926beeedc18ef55292467ac828d namespace=k8s.io
	Mar 25 02:18:14 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:18:14.717986390Z" level=info msg="cleaning up dead shim"
	Mar 25 02:18:14 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:18:14.728409589Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:18:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4798\n"
	Mar 25 02:18:15 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:18:15.683277220Z" level=info msg="CreateContainer within sandbox \"31b95e88dc2af7fec2970270795b024a3276e9f48c04816cae5385c2cf69b2c4\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Mar 25 02:18:15 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:18:15.696763444Z" level=info msg="CreateContainer within sandbox \"31b95e88dc2af7fec2970270795b024a3276e9f48c04816cae5385c2cf69b2c4\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"f6cf87321d1b58562784760c34ba7a202790363bad9de268defd07f3272c67f8\""
	Mar 25 02:18:15 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:18:15.697237467Z" level=info msg="StartContainer for \"f6cf87321d1b58562784760c34ba7a202790363bad9de268defd07f3272c67f8\""
	Mar 25 02:18:15 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:18:15.888081682Z" level=info msg="StartContainer for \"f6cf87321d1b58562784760c34ba7a202790363bad9de268defd07f3272c67f8\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220325015306-262786
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220325015306-262786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95
	                    minikube.k8s.io/name=old-k8s-version-20220325015306-262786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_25T02_15_19_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Mar 2022 02:15:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Mar 2022 02:19:14 +0000   Fri, 25 Mar 2022 02:15:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Mar 2022 02:19:14 +0000   Fri, 25 Mar 2022 02:15:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Mar 2022 02:19:14 +0000   Fri, 25 Mar 2022 02:15:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 25 Mar 2022 02:19:14 +0000   Fri, 25 Mar 2022 02:15:10 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-20220325015306-262786
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	System Info:
	 Machine ID:                 140a143b31184b58be947b52a01fff83
	 System UUID:                586019ba-8c2c-445d-9550-f545f1f4ef4d
	 Boot ID:                    63fce5d9-a30b-498a-bfed-7dd46d23a363
	 Kernel Version:             5.13.0-1021-gcp
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.5.10
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20220325015306-262786                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  kube-system                kindnet-vb8zw                                                    100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                kube-apiserver-old-k8s-version-20220325015306-262786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m3s
	  kube-system                kube-controller-manager-old-k8s-version-20220325015306-262786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                kube-proxy-w2fhc                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                kube-scheduler-old-k8s-version-20220325015306-262786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From                                               Message
	  ----    ------                   ----                   ----                                               -------
	  Normal  NodeHasSufficientMemory  4m26s (x8 over 4m26s)  kubelet, old-k8s-version-20220325015306-262786     Node old-k8s-version-20220325015306-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x8 over 4m26s)  kubelet, old-k8s-version-20220325015306-262786     Node old-k8s-version-20220325015306-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x7 over 4m26s)  kubelet, old-k8s-version-20220325015306-262786     Node old-k8s-version-20220325015306-262786 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m1s                   kube-proxy, old-k8s-version-20220325015306-262786  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +1.011896] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.023877] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +0.953086] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethf4b51852
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 70 36 f8 5f b0 08 06
	[  +0.031950] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth6535462d
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a c6 0e 0e 23 49 08 06
	[  +0.644934] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth6535462d
	[  +0.401878] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth5b52bbbf
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 37 c2 ed 50 67 08 06
	[  +0.935995] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.035860] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.019942] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[Mar25 02:14] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth3401b1e0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 76 e9 ba cf fb f8 08 06
	[  +0.179199] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vetha9eb2fdf
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 32 f8 c1 5c 31 f0 08 06
	[  +0.564272] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethc1de7e82
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e d1 71 83 67 99 08 06
	[  +0.295714] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth71c4bd69
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe 35 ee 14 12 82 08 06
	
	* 
	* ==> etcd [b5876f14d59d16a3864556cd668573e5ab98ce4ada95bce268d655a2dedf6463] <==
	* 2022-03-25 02:15:10.599416 I | etcdserver: initial cluster = old-k8s-version-20220325015306-262786=https://192.168.76.2:2380
	2022-03-25 02:15:10.602812 I | etcdserver: starting member ea7e25599daad906 in cluster 6f20f2c4b2fb5f8a
	2022-03-25 02:15:10.602847 I | raft: ea7e25599daad906 became follower at term 0
	2022-03-25 02:15:10.602853 I | raft: newRaft ea7e25599daad906 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2022-03-25 02:15:10.602857 I | raft: ea7e25599daad906 became follower at term 1
	2022-03-25 02:15:10.609968 W | auth: simple token is not cryptographically signed
	2022-03-25 02:15:10.612943 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2022-03-25 02:15:10.613254 I | etcdserver: ea7e25599daad906 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2022-03-25 02:15:10.613542 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2022-03-25 02:15:10.615595 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2022-03-25 02:15:10.615734 I | embed: listening for metrics on http://192.168.76.2:2381
	2022-03-25 02:15:10.615857 I | embed: listening for metrics on http://127.0.0.1:2381
	2022-03-25 02:15:11.103195 I | raft: ea7e25599daad906 is starting a new election at term 1
	2022-03-25 02:15:11.103233 I | raft: ea7e25599daad906 became candidate at term 2
	2022-03-25 02:15:11.103262 I | raft: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	2022-03-25 02:15:11.103284 I | raft: ea7e25599daad906 became leader at term 2
	2022-03-25 02:15:11.103292 I | raft: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2022-03-25 02:15:11.103528 I | etcdserver: setting up the initial cluster version to 3.3
	2022-03-25 02:15:11.104488 N | etcdserver/membership: set the initial cluster version to 3.3
	2022-03-25 02:15:11.104548 I | etcdserver/api: enabled capabilities for version 3.3
	2022-03-25 02:15:11.104577 I | etcdserver: published {Name:old-k8s-version-20220325015306-262786 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2022-03-25 02:15:11.104644 I | embed: ready to serve client requests
	2022-03-25 02:15:11.104663 I | embed: ready to serve client requests
	2022-03-25 02:15:11.107618 I | embed: serving client requests on 192.168.76.2:2379
	2022-03-25 02:15:11.108050 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  02:19:35 up  4:57,  0 users,  load average: 0.39, 0.86, 1.24
	Linux old-k8s-version-20220325015306-262786 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [3e153bc9be8e3a8f1bd845e03b812a61f58707416dc259c62fd7639162e71b2e] <==
	* I0325 02:15:17.032869       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0325 02:15:17.312936       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0325 02:15:17.618456       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0325 02:15:17.619241       1 controller.go:606] quota admission added evaluator for: endpoints
	I0325 02:15:18.499789       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0325 02:15:19.010898       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0325 02:15:19.367989       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0325 02:15:33.651928       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0325 02:15:33.677349       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0325 02:15:33.687828       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0325 02:15:37.366545       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0325 02:15:37.366661       1 handler_proxy.go:99] no RequestInfo found in the context
	E0325 02:15:37.366777       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:15:37.366794       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0325 02:16:37.367032       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0325 02:16:37.367110       1 handler_proxy.go:99] no RequestInfo found in the context
	E0325 02:16:37.367168       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:16:37.367184       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0325 02:18:37.367424       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0325 02:18:37.367508       1 handler_proxy.go:99] no RequestInfo found in the context
	E0325 02:18:37.367574       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:18:37.367588       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [85b04b6171d3e514ba8fe84e6ca60fc75eb1ff0ea1fc607a2b792d171f02a5a0] <==
	* I0325 02:15:35.898517       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"8204c683-6274-499a-a5ac-e00a9c35fe0e", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0325 02:15:35.901085       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0325 02:15:35.903936       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0325 02:15:35.903925       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"2c25aa0f-4ec3-43a5-9335-5f560a6f9360", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0325 02:15:35.907716       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0325 02:15:35.907712       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"8204c683-6274-499a-a5ac-e00a9c35fe0e", APIVersion:"apps/v1", ResourceVersion:"436", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0325 02:15:35.911679       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0325 02:15:35.911727       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"8204c683-6274-499a-a5ac-e00a9c35fe0e", APIVersion:"apps/v1", ResourceVersion:"436", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0325 02:15:36.411073       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-6f89b5864b", UID:"d225d678-1116-487e-a190-de9ac98c68f7", APIVersion:"apps/v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-6f89b5864b-w7k4b
	I0325 02:15:36.924728       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"8204c683-6274-499a-a5ac-e00a9c35fe0e", APIVersion:"apps/v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-766959b846-m7r5f
	I0325 02:15:36.929285       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"2c25aa0f-4ec3-43a5-9335-5f560a6f9360", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-6b84985989-7n44j
	E0325 02:16:04.457941       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:16:06.206297       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:16:34.709481       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:16:38.208215       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:17:04.961286       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:17:10.209906       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:17:35.212989       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:17:42.211500       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:18:05.464633       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:18:14.213253       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:18:35.716036       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:18:46.215206       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:19:05.967605       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:19:18.216687       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [0f0c7b7b9b87a3fb5fb66ad36820e076e8a71d97b4e72f8850b0f7664c56e904] <==
	* W0325 02:15:34.314982       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0325 02:15:34.321515       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0325 02:15:34.321546       1 server_others.go:149] Using iptables Proxier.
	I0325 02:15:34.321956       1 server.go:529] Version: v1.16.0
	I0325 02:15:34.322406       1 config.go:131] Starting endpoints config controller
	I0325 02:15:34.322437       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0325 02:15:34.322518       1 config.go:313] Starting service config controller
	I0325 02:15:34.322542       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0325 02:15:34.422634       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0325 02:15:34.422699       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [df11628d7665411966cbbb2ac185c87a07097d8b6f2ce2aac3800860bfd82f72] <==
	* I0325 02:15:14.298832       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0325 02:15:14.392493       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 02:15:14.399607       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 02:15:14.399693       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0325 02:15:14.399755       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:15:14.399842       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 02:15:14.399961       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 02:15:14.400414       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0325 02:15:14.400441       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 02:15:14.400427       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 02:15:14.402696       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0325 02:15:14.404598       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0325 02:15:15.393856       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 02:15:15.400840       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 02:15:15.402274       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0325 02:15:15.403210       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:15:15.404458       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 02:15:15.405563       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 02:15:15.406904       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0325 02:15:15.408069       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 02:15:15.409089       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 02:15:15.410312       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0325 02:15:15.411358       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0325 02:15:36.416025       1 factory.go:585] pod is already present in the activeQ
	E0325 02:15:36.936626       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-03-25 02:09:40 UTC, end at Fri 2022-03-25 02:19:36 UTC. --
	Mar 25 02:17:34 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:17:34.508539    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:17:39 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:17:39.509268    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:17:44 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:17:44.509878    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:17:49 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:17:49.510633    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:17:54 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:17:54.511429    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:17:59 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:17:59.512052    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:18:04 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:18:04.512811    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:18:09 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:18:09.513473    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:18:14 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:18:14.514158    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:18:19 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:18:19.514829    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:18:24 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:18:24.515651    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:18:29 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:18:29.516399    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:18:34 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:18:34.517080    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:18:39 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:18:39.517786    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:18:44 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:18:44.518558    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:18:49 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:18:49.519247    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:18:54 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:18:54.519945    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:18:59 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:18:59.520631    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:19:04 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:19:04.521324    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:19:09 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:19:09.522077    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:19:14 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:19:14.522854    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:19:19 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:19:19.523634    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:19:24 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:19:24.524298    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:19:29 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:19:29.525146    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:19:34 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:19:34.526155    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220325015306-262786 -n old-k8s-version-20220325015306-262786
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20220325015306-262786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-5644d7b6d9-sv5bc metrics-server-6f89b5864b-w7k4b storage-provisioner dashboard-metrics-scraper-6b84985989-7n44j kubernetes-dashboard-766959b846-m7r5f
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220325015306-262786 describe pod coredns-5644d7b6d9-sv5bc metrics-server-6f89b5864b-w7k4b storage-provisioner dashboard-metrics-scraper-6b84985989-7n44j kubernetes-dashboard-766959b846-m7r5f
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220325015306-262786 describe pod coredns-5644d7b6d9-sv5bc metrics-server-6f89b5864b-w7k4b storage-provisioner dashboard-metrics-scraper-6b84985989-7n44j kubernetes-dashboard-766959b846-m7r5f: exit status 1 (67.855912ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-sv5bc" not found
	Error from server (NotFound): pods "metrics-server-6f89b5864b-w7k4b" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6b84985989-7n44j" not found
	Error from server (NotFound): pods "kubernetes-dashboard-766959b846-m7r5f" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20220325015306-262786 describe pod coredns-5644d7b6d9-sv5bc metrics-server-6f89b5864b-w7k4b storage-provisioner dashboard-metrics-scraper-6b84985989-7n44j kubernetes-dashboard-766959b846-m7r5f: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (597.23s)
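Reading the post-mortem above: the node never leaves the NotReady/"cni plugin not initialized" state, even after containerd restarts the kindnet-cni container (Attempt:1) at 02:18:15, which is why the wait for system pods times out. Since this run pins the kubelet's CNI config directory via kubelet.cni-conf-dir=/etc/cni/net.mk, one way to triage (a hedged sketch, not part of the captured run) is to check whether any CNI config file ever landed in that directory and what state the CNI pods are in:

	# Sketch only; the profile name below is taken from the log above.
	out/minikube-linux-amd64 ssh -p old-k8s-version-20220325015306-262786 -- ls -la /etc/cni/net.mk /etc/cni/net.d
	kubectl --context old-k8s-version-20220325015306-262786 get pods -n kube-system -o wide
	kubectl --context old-k8s-version-20220325015306-262786 describe node old-k8s-version-20220325015306-262786

An empty /etc/cni/net.mk would be consistent with the kubelet errors above: the kubelet is watching a directory that the CNI pod never populated.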

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (297.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220325020956-262786 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3
E0325 02:11:00.415210  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 02:11:05.105628  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
E0325 02:11:12.032791  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
E0325 02:11:39.718360  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
E0325 02:12:37.498359  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:12:37.503655  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:12:37.513982  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:12:37.534218  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:12:37.574506  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:12:37.655533  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:12:37.816149  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:12:38.136749  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:12:38.777824  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:12:40.058019  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:12:42.618166  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:12:47.738865  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:12:57.979695  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:13:18.460674  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:13:47.791241  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 02:13:56.093927  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 02:13:57.673983  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
E0325 02:13:57.679223  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
E0325 02:13:57.689444  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
E0325 02:13:57.709731  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
E0325 02:13:57.750018  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
E0325 02:13:57.830394  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
E0325 02:13:57.990803  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
E0325 02:13:58.311551  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
E0325 02:13:58.952184  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
E0325 02:13:59.421473  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:14:00.232346  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
E0325 02:14:02.792504  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
E0325 02:14:07.913656  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
E0325 02:14:18.154865  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
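The repeated cert_rotation.go:168 errors above appear to come from the test binary's client-go certificate-reload watcher: it is trying to re-read client certificates for kubeconfig contexts (addons, functional, kindnet, enable-default-cni, bridge, ...) whose minikube profile directories were deleted earlier in the run, so they are likely noise relative to this test. A hedged way to confirm (assuming the kubeconfig layout shown in the paths above) is to compare the remaining contexts against the profile directories that still exist:

	# Sketch only; the .minikube path is the one printed in the errors above.
	kubectl config get-contexts
	ls /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles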

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-different-port-20220325020956-262786 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3: exit status 80 (4m55.150163525s)

                                                
                                                
-- stdout --
	* [default-k8s-different-port-20220325020956-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node default-k8s-different-port-20220325020956-262786 in cluster default-k8s-different-port-20220325020956-262786
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0325 02:09:57.047407  499754 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:09:57.047555  499754 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:09:57.047569  499754 out.go:310] Setting ErrFile to fd 2...
	I0325 02:09:57.047574  499754 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:09:57.047691  499754 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:09:57.047991  499754 out.go:304] Setting JSON to false
	I0325 02:09:57.049296  499754 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17269,"bootTime":1648156928,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:09:57.049376  499754 start.go:125] virtualization: kvm guest
	I0325 02:09:57.052171  499754 out.go:176] * [default-k8s-different-port-20220325020956-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:09:57.053840  499754 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:09:57.052363  499754 notify.go:193] Checking for updates...
	I0325 02:09:57.055543  499754 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:09:57.057049  499754 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:09:57.058584  499754 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:09:57.059988  499754 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:09:57.061783  499754 config.go:176] Loaded profile config "embed-certs-20220325020743-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:09:57.061898  499754 config.go:176] Loaded profile config "no-preload-20220325020326-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:09:57.061977  499754 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0325 02:09:57.062034  499754 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:09:57.105819  499754 docker.go:136] docker version: linux-20.10.14
	I0325 02:09:57.105933  499754 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:09:57.205697  499754 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:09:57.137437369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:09:57.205810  499754 docker.go:253] overlay module found
	I0325 02:09:57.208293  499754 out.go:176] * Using the docker driver based on user configuration
	I0325 02:09:57.208333  499754 start.go:284] selected driver: docker
	I0325 02:09:57.208341  499754 start.go:801] validating driver "docker" against <nil>
	I0325 02:09:57.208362  499754 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:09:57.208430  499754 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:09:57.208452  499754 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0325 02:09:57.210227  499754 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:09:57.210929  499754 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:09:57.337156  499754 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:09:57.251383592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:09:57.337361  499754 start_flags.go:290] no existing cluster config was found, will generate one from the flags 
	I0325 02:09:57.337580  499754 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 02:09:57.337610  499754 cni.go:93] Creating CNI manager for ""
	I0325 02:09:57.337622  499754 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:09:57.337636  499754 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0325 02:09:57.337643  499754 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0325 02:09:57.337652  499754 start_flags.go:299] Found "CNI" CNI - setting NetworkPlugin=cni
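	Note: the cni.go lines above show minikube pairing the docker driver + containerd runtime with kindnet and auto-injecting the matching kubelet extra-config. The same setting can be supplied explicitly on the command line (a sketch; the value is exactly the one auto-set above):
	
	    minikube start --driver=docker --container-runtime=containerd \
	      --extra-config=kubelet.cni-conf-dir=/etc/cni/net.mk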
	I0325 02:09:57.337665  499754 start_flags.go:304] config:
	{Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:09:57.340184  499754 out.go:176] * Starting control plane node default-k8s-different-port-20220325020956-262786 in cluster default-k8s-different-port-20220325020956-262786
	I0325 02:09:57.340226  499754 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:09:57.341673  499754 out.go:176] * Pulling base image ...
	I0325 02:09:57.341695  499754 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:09:57.341729  499754 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
	I0325 02:09:57.341751  499754 cache.go:57] Caching tarball of preloaded images
	I0325 02:09:57.341797  499754 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:09:57.341986  499754 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 02:09:57.342010  499754 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on containerd
	I0325 02:09:57.342169  499754 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json ...
	I0325 02:09:57.342200  499754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json: {Name:mk6616c41d8f156711ece89bc5f7c3ef7182991a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:09:57.377860  499754 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:09:57.377891  499754 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:09:57.377908  499754 cache.go:208] Successfully downloaded all kic artifacts
	I0325 02:09:57.377958  499754 start.go:348] acquiring machines lock for default-k8s-different-port-20220325020956-262786: {Name:mk1740da455fcceda9a6f7400776a3a68790d09c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:09:57.378118  499754 start.go:352] acquired machines lock for "default-k8s-different-port-20220325020956-262786" in 134.875µs
	I0325 02:09:57.378156  499754 start.go:90] Provisioning new machine with config: &{Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:09:57.378297  499754 start.go:127] createHost starting for "" (driver="docker")
	I0325 02:09:57.381757  499754 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0325 02:09:57.382033  499754 start.go:161] libmachine.API.Create for "default-k8s-different-port-20220325020956-262786" (driver="docker")
	I0325 02:09:57.382073  499754 client.go:168] LocalClient.Create starting
	I0325 02:09:57.382153  499754 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0325 02:09:57.382199  499754 main.go:130] libmachine: Decoding PEM data...
	I0325 02:09:57.382233  499754 main.go:130] libmachine: Parsing certificate...
	I0325 02:09:57.382304  499754 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0325 02:09:57.382333  499754 main.go:130] libmachine: Decoding PEM data...
	I0325 02:09:57.382352  499754 main.go:130] libmachine: Parsing certificate...
	I0325 02:09:57.382749  499754 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20220325020956-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0325 02:09:57.418938  499754 cli_runner.go:180] docker network inspect default-k8s-different-port-20220325020956-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0325 02:09:57.419018  499754 network_create.go:254] running [docker network inspect default-k8s-different-port-20220325020956-262786] to gather additional debugging logs...
	I0325 02:09:57.419039  499754 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20220325020956-262786
	W0325 02:09:57.455464  499754 cli_runner.go:180] docker network inspect default-k8s-different-port-20220325020956-262786 returned with exit code 1
	I0325 02:09:57.455496  499754 network_create.go:257] error running [docker network inspect default-k8s-different-port-20220325020956-262786]: docker network inspect default-k8s-different-port-20220325020956-262786: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220325020956-262786
	I0325 02:09:57.455540  499754 network_create.go:259] output of [docker network inspect default-k8s-different-port-20220325020956-262786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220325020956-262786
	
	** /stderr **
	I0325 02:09:57.455625  499754 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:09:57.493122  499754 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000384608] misses:0}
	I0325 02:09:57.493180  499754 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0325 02:09:57.493196  499754 network_create.go:106] attempt to create docker network default-k8s-different-port-20220325020956-262786 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0325 02:09:57.493244  499754 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220325020956-262786
	I0325 02:09:57.569900  499754 network_create.go:90] docker network default-k8s-different-port-20220325020956-262786 192.168.49.0/24 created
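	Note: the freshly created bridge network can be checked from the host with the same Go-template style of inspect that minikube itself uses; the expected output is the subnet and gateway reserved above (a sketch):
	
	    docker network inspect default-k8s-different-port-20220325020956-262786 \
	      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	    # 192.168.49.0/24 192.168.49.1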
	I0325 02:09:57.569948  499754 kic.go:106] calculated static IP "192.168.49.2" for the "default-k8s-different-port-20220325020956-262786" container
	I0325 02:09:57.570021  499754 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0325 02:09:57.608487  499754 cli_runner.go:133] Run: docker volume create default-k8s-different-port-20220325020956-262786 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220325020956-262786 --label created_by.minikube.sigs.k8s.io=true
	I0325 02:09:57.643256  499754 oci.go:102] Successfully created a docker volume default-k8s-different-port-20220325020956-262786
	I0325 02:09:57.643337  499754 cli_runner.go:133] Run: docker run --rm --name default-k8s-different-port-20220325020956-262786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220325020956-262786 --entrypoint /usr/bin/test -v default-k8s-different-port-20220325020956-262786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0325 02:09:58.217460  499754 oci.go:106] Successfully prepared a docker volume default-k8s-different-port-20220325020956-262786
	I0325 02:09:58.217533  499754 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:09:58.217575  499754 kic.go:179] Starting extracting preloaded images to volume ...
	I0325 02:09:58.217660  499754 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220325020956-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0325 02:10:07.677687  499754 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220325020956-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (9.45997079s)
	I0325 02:10:07.677755  499754 kic.go:188] duration metric: took 9.460175 seconds to extract preloaded images to volume
	W0325 02:10:07.677813  499754 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0325 02:10:07.677832  499754 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0325 02:10:07.677944  499754 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0325 02:10:07.782903  499754 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20220325020956-262786 --name default-k8s-different-port-20220325020956-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220325020956-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20220325020956-262786 --network default-k8s-different-port-20220325020956-262786 --ip 192.168.49.2 --volume default-k8s-different-port-20220325020956-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0325 02:10:08.218357  499754 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Running}}
	I0325 02:10:08.261389  499754 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:10:08.303094  499754 cli_runner.go:133] Run: docker exec default-k8s-different-port-20220325020956-262786 stat /var/lib/dpkg/alternatives/iptables
	I0325 02:10:08.376219  499754 oci.go:281] the created container "default-k8s-different-port-20220325020956-262786" has a running status.
	I0325 02:10:08.376256  499754 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa...
	I0325 02:10:08.589639  499754 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0325 02:10:08.698073  499754 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:10:08.736950  499754 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0325 02:10:08.736977  499754 kic_runner.go:114] Args: [docker exec --privileged default-k8s-different-port-20220325020956-262786 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0325 02:10:08.829176  499754 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:10:08.866417  499754 machine.go:88] provisioning docker machine ...
	I0325 02:10:08.866468  499754 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220325020956-262786"
	I0325 02:10:08.866534  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:08.903231  499754 main.go:130] libmachine: Using SSH client type: native
	I0325 02:10:08.903464  499754 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49574 <nil> <nil>}
	I0325 02:10:08.903490  499754 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220325020956-262786 && echo "default-k8s-different-port-20220325020956-262786" | sudo tee /etc/hostname
	I0325 02:10:09.040143  499754 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220325020956-262786
	
	I0325 02:10:09.040223  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:09.075006  499754 main.go:130] libmachine: Using SSH client type: native
	I0325 02:10:09.075167  499754 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49574 <nil> <nil>}
	I0325 02:10:09.075194  499754 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220325020956-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220325020956-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220325020956-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
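	Note: the shell snippet above is idempotent: an existing 127.0.1.1 entry is rewritten in place via sed, otherwise one is appended, so on a fresh container /etc/hosts gains the single line:
	
	    127.0.1.1 default-k8s-different-port-20220325020956-262786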
	I0325 02:10:09.198799  499754 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 02:10:09.198848  499754 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:10:09.198872  499754 ubuntu.go:177] setting up certificates
	I0325 02:10:09.198884  499754 provision.go:83] configureAuth start
	I0325 02:10:09.198944  499754 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:10:09.231874  499754 provision.go:138] copyHostCerts
	I0325 02:10:09.231935  499754 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:10:09.231946  499754 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:10:09.232012  499754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:10:09.232111  499754 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:10:09.232127  499754 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:10:09.232160  499754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:10:09.232229  499754 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:10:09.232243  499754 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:10:09.232276  499754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:10:09.232335  499754 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220325020956-262786 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220325020956-262786]
	I0325 02:10:09.484166  499754 provision.go:172] copyRemoteCerts
	I0325 02:10:09.484235  499754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:10:09.484295  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:09.519681  499754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49574 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
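	Note: the sshutil line records the tunnel minikube opened: host port 49574 on 127.0.0.1 forwards to the container's sshd, authenticated as user docker with the generated id_rsa. A manual debugging session would look like this (a sketch, assuming MINIKUBE_HOME points at the .minikube directory used in this job):
	
	    ssh -p 49574 -o StrictHostKeyChecking=no \
	      -i "$MINIKUBE_HOME/machines/default-k8s-different-port-20220325020956-262786/id_rsa" \
	      docker@127.0.0.1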
	I0325 02:10:09.610194  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:10:09.627463  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0325 02:10:09.644929  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0325 02:10:09.662266  499754 provision.go:86] duration metric: configureAuth took 463.370434ms
	I0325 02:10:09.662291  499754 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:10:09.662460  499754 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:10:09.662482  499754 machine.go:91] provisioned docker machine in 796.03566ms
	I0325 02:10:09.662488  499754 client.go:171] LocalClient.Create took 12.280404225s
	I0325 02:10:09.662507  499754 start.go:169] duration metric: libmachine.API.Create for "default-k8s-different-port-20220325020956-262786" took 12.280475448s
	I0325 02:10:09.662521  499754 start.go:302] post-start starting for "default-k8s-different-port-20220325020956-262786" (driver="docker")
	I0325 02:10:09.662528  499754 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:10:09.662596  499754 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:10:09.662644  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:09.695792  499754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49574 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:10:09.782770  499754 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:10:09.785512  499754 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:10:09.785540  499754 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:10:09.785555  499754 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:10:09.785565  499754 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:10:09.785581  499754 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:10:09.785640  499754 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:10:09.785745  499754 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:10:09.785860  499754 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:10:09.792604  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:10:09.810666  499754 start.go:305] post-start completed in 148.129279ms
	I0325 02:10:09.811078  499754 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:10:09.843910  499754 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json ...
	I0325 02:10:09.844135  499754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:10:09.844175  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:09.875677  499754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49574 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:10:09.959378  499754 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:10:09.963145  499754 start.go:130] duration metric: createHost completed in 12.584833183s
	I0325 02:10:09.963173  499754 start.go:81] releasing machines lock for "default-k8s-different-port-20220325020956-262786", held for 12.58503599s
	I0325 02:10:09.963258  499754 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:10:09.996631  499754 ssh_runner.go:195] Run: systemctl --version
	I0325 02:10:09.996689  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:09.996708  499754 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:10:09.996794  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:10.033103  499754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49574 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:10:10.033453  499754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49574 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:10:10.134432  499754 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:10:10.145186  499754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:10:10.155725  499754 docker.go:183] disabling docker service ...
	I0325 02:10:10.155795  499754 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:10:10.174113  499754 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:10:10.183613  499754 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:10:10.266394  499754 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:10:10.348135  499754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 02:10:10.357620  499754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:10:10.370205  499754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
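	Note: the entire containerd configuration is shipped as one inline base64 payload and materialized on the node with base64 -d, so the generated /etc/containerd/config.toml can be read locally by decoding the same payload (a sketch; substitute the quoted base64 string above for the placeholder):
	
	    printf %s '<base64 payload above>' | base64 -d | less
	
	Among other settings, the decoded TOML contains snapshotter = "overlayfs", SystemdCgroup = false, and conf_dir = "/etc/cni/net.mk", the latter matching the kubelet extra-config set earlier.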
	I0325 02:10:10.383477  499754 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:10:10.389812  499754 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:10:10.396907  499754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:10:10.474633  499754 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:10:10.542119  499754 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:10:10.542209  499754 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:10:10.546078  499754 start.go:462] Will wait 60s for crictl version
	I0325 02:10:10.546139  499754 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:10:10.570521  499754 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:10:10Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
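	Note: the fatal line above is a transient race, not the test failure: containerd was restarted moments earlier and its CRI plugin has not finished initializing, so the first probe fails and retry.go schedules another attempt, which succeeds about 11 seconds later below. The same probe can be run by hand against the socket minikube waits on (a sketch):
	
	    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version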
	I0325 02:10:21.620886  499754 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:10:21.644208  499754 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 02:10:21.644274  499754 ssh_runner.go:195] Run: containerd --version
	I0325 02:10:21.663628  499754 ssh_runner.go:195] Run: containerd --version
	I0325 02:10:21.687141  499754 out.go:176] * Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	I0325 02:10:21.687240  499754 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20220325020956-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:10:21.722801  499754 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0325 02:10:21.726147  499754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:10:21.737667  499754 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 02:10:21.737773  499754 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:10:21.737837  499754 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:10:21.760396  499754 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:10:21.760417  499754 containerd.go:526] Images already preloaded, skipping extraction
	I0325 02:10:21.760461  499754 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:10:21.782452  499754 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:10:21.782473  499754 cache_images.go:84] Images are preloaded, skipping loading
	I0325 02:10:21.782514  499754 ssh_runner.go:195] Run: sudo crictl info
	I0325 02:10:21.807097  499754 cni.go:93] Creating CNI manager for ""
	I0325 02:10:21.807132  499754 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:10:21.807148  499754 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 02:10:21.807166  499754 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220325020956-262786 NodeName:default-k8s-different-port-20220325020956-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.
49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 02:10:21.807323  499754 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220325020956-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
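	Note: the rendered manifest above is written to /var/tmp/minikube/kubeadm.yaml.new (see the 2077-byte scp below) and can be exercised without mutating the node via kubeadm's dry-run mode (a sketch, assuming the cached v1.23.3 binaries referenced just below):
	
	    sudo /var/lib/minikube/binaries/v1.23.3/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run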
	
	I0325 02:10:21.807405  499754 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220325020956-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
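	Note: the systemd drop-in above is installed as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp calls below; once in place, the effective unit can be reviewed on the node with systemctl (a sketch):
	
	    systemctl cat kubelet
	    systemctl status kubelet --no-pager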
	I0325 02:10:21.807460  499754 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
	I0325 02:10:21.814738  499754 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 02:10:21.814810  499754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 02:10:21.821717  499754 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (593 bytes)
	I0325 02:10:21.834543  499754 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0325 02:10:21.847823  499754 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I0325 02:10:21.860732  499754 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0325 02:10:21.863660  499754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:10:21.873114  499754 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786 for IP: 192.168.49.2
	I0325 02:10:21.873219  499754 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 02:10:21.873254  499754 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 02:10:21.873300  499754 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.key
	I0325 02:10:21.873315  499754 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.crt with IP's: []
	I0325 02:10:22.086042  499754 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.crt ...
	I0325 02:10:22.086079  499754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.crt: {Name:mk3fa505ddeadfb71ae12132b08f65cbaffe9fee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:10:22.086286  499754 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.key ...
	I0325 02:10:22.086301  499754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.key: {Name:mk8cd1621ad5fabcdccd60fe08084c6c3c77af54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:10:22.086405  499754 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key.dd3b5fb2
	I0325 02:10:22.086422  499754 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0325 02:10:22.211354  499754 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt.dd3b5fb2 ...
	I0325 02:10:22.211391  499754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt.dd3b5fb2: {Name:mk73aa504bc3cba2d5ab4ac154baa063fd0d5f13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:10:22.211604  499754 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key.dd3b5fb2 ...
	I0325 02:10:22.211621  499754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key.dd3b5fb2: {Name:mk9ee3873ae9b5591bffcf4d5c0014be09430042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:10:22.211717  499754 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt
	I0325 02:10:22.211781  499754 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key
	I0325 02:10:22.211835  499754 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key
	I0325 02:10:22.211852  499754 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.crt with IP's: []
	I0325 02:10:22.265235  499754 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.crt ...
	I0325 02:10:22.265270  499754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.crt: {Name:mk264bc0fed6e01799cd161be0b8068147ee22be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:10:22.265482  499754 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key ...
	I0325 02:10:22.265498  499754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key: {Name:mk2f43ee9cbf3f46b001e7c43d5526eb55c53254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:10:22.265681  499754 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 02:10:22.265732  499754 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 02:10:22.265753  499754 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 02:10:22.265782  499754 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 02:10:22.265810  499754 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 02:10:22.265836  499754 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 02:10:22.265877  499754 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:10:22.267336  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 02:10:22.286155  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0325 02:10:22.303500  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 02:10:22.321023  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0325 02:10:22.338447  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 02:10:22.355546  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 02:10:22.373068  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 02:10:22.391964  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 02:10:22.410726  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 02:10:22.429538  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 02:10:22.449045  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 02:10:22.468241  499754 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 02:10:22.481065  499754 ssh_runner.go:195] Run: openssl version
	I0325 02:10:22.486711  499754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 02:10:22.494921  499754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:10:22.498209  499754 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:10:22.498300  499754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:10:22.503928  499754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0325 02:10:22.511759  499754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 02:10:22.519485  499754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 02:10:22.522862  499754 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 02:10:22.522914  499754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 02:10:22.527845  499754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
	I0325 02:10:22.535681  499754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 02:10:22.543890  499754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 02:10:22.547114  499754 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 02:10:22.547170  499754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 02:10:22.552138  499754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
	I0325 02:10:22.559456  499754 kubeadm.go:391] StartCluster: {Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:10:22.559551  499754 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 02:10:22.559605  499754 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:10:22.583656  499754 cri.go:87] found id: ""
	I0325 02:10:22.583731  499754 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 02:10:22.590936  499754 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:10:22.597739  499754 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 02:10:22.597789  499754 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:10:22.604662  499754 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0325 02:10:22.604711  499754 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0325 02:10:38.485167  499754 out.go:203]   - Generating certificates and keys ...
	I0325 02:10:38.488498  499754 out.go:203]   - Booting up control plane ...
	I0325 02:10:38.491047  499754 out.go:203]   - Configuring RBAC rules ...
	I0325 02:10:38.492754  499754 cni.go:93] Creating CNI manager for ""
	I0325 02:10:38.492778  499754 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:10:38.494300  499754 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:10:38.494356  499754 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:10:38.498637  499754 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 02:10:38.498656  499754 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:10:38.511807  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:10:39.348464  499754 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 02:10:39.348575  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=default-k8s-different-port-20220325020956-262786 minikube.k8s.io/updated_at=2022_03_25T02_10_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:39.348576  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:39.416878  499754 ops.go:34] apiserver oom_adj: -16
	I0325 02:10:39.417009  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:39.973782  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:40.473400  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:40.974027  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:41.473940  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:41.973555  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:42.473628  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:42.973153  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:43.473122  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:43.974112  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:44.473987  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:44.973876  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:45.474137  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:45.973209  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:46.473313  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:46.974049  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:47.473211  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:47.973274  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:48.474093  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:48.974045  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:49.473780  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:49.973598  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:50.473469  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:50.973342  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:51.473230  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:51.528991  499754 kubeadm.go:1020] duration metric: took 12.180509126s to wait for elevateKubeSystemPrivileges.
	I0325 02:10:51.529033  499754 kubeadm.go:393] StartCluster complete in 28.969578042s
	I0325 02:10:51.529051  499754 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:10:51.529136  499754 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:10:51.530502  499754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:10:52.047005  499754 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220325020956-262786" rescaled to 1
	I0325 02:10:52.047116  499754 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 02:10:52.047120  499754 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:10:52.049299  499754 out.go:176] * Verifying Kubernetes components...
	I0325 02:10:52.049361  499754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:10:52.047200  499754 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0325 02:10:52.047417  499754 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:10:52.049465  499754 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:10:52.049483  499754 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220325020956-262786"
	I0325 02:10:52.049449  499754 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:10:52.051250  499754 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:10:52.051270  499754 addons.go:165] addon storage-provisioner should already be in state true
	I0325 02:10:52.049922  499754 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:10:52.051307  499754 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:10:52.051846  499754 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:10:52.095525  499754 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:10:52.095669  499754 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:10:52.095682  499754 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 02:10:52.095738  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:52.099391  499754 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:10:52.099423  499754 addons.go:165] addon default-storageclass should already be in state true
	I0325 02:10:52.099457  499754 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:10:52.100002  499754 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:10:52.117112  499754 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0325 02:10:52.118624  499754 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220325020956-262786" to be "Ready" ...
	I0325 02:10:52.145585  499754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49574 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:10:52.145943  499754 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 02:10:52.145966  499754 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 02:10:52.146021  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:52.181889  499754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49574 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:10:52.397643  499754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:10:52.397773  499754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 02:10:52.500022  499754 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0325 02:10:52.740166  499754 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0325 02:10:52.740200  499754 addons.go:417] enableAddons completed in 693.011953ms
	I0325 02:10:54.125869  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:10:56.126258  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:10:58.626709  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:01.126502  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:03.626891  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:06.126044  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:08.126602  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:10.626152  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:13.125939  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:15.126097  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:17.126879  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:19.626460  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:22.126746  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:24.625938  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:26.626821  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:29.125973  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:31.126451  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:33.625824  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:35.626488  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:38.126413  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:40.126766  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:42.626417  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:45.126171  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:47.626560  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:50.126171  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:52.126554  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:54.126864  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:56.626282  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:58.626784  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:01.126195  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:03.126534  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:05.126676  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:07.126769  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:09.626009  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:11.626499  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:14.126333  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:16.127511  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:18.626217  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:21.125909  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:23.625969  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:25.626369  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:27.626709  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:30.126323  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:32.126500  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:34.626576  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:37.126550  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:39.626394  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:42.126518  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:44.626536  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:47.125702  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:49.126898  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:51.625941  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:53.626161  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:56.126674  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:58.626173  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:00.626748  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:03.126612  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:05.127004  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:07.625980  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:10.126585  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:12.626225  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:14.626565  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:17.126776  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:19.626060  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:21.626308  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:23.626421  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:25.626666  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:28.126547  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:30.126815  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:32.127249  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:34.626237  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:37.126783  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:39.626472  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:41.626594  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:44.126195  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:46.126248  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:48.126623  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:50.626430  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:53.126872  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:55.626160  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:58.126202  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:00.126773  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:02.626061  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:05.126573  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:07.626731  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:09.628112  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:12.126217  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:14.126733  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:16.626727  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:18.626880  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:21.126074  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:23.126324  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:25.126819  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:27.626496  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:30.126312  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:32.127310  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:34.626485  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:36.626528  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:39.126798  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:41.127164  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:43.626585  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:46.126803  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:48.127039  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:50.127543  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:52.128852  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:52.128877  499754 node_ready.go:38] duration metric: took 4m0.010219075s waiting for node "default-k8s-different-port-20220325020956-262786" to be "Ready" ...
	I0325 02:14:52.131207  499754 out.go:176] 
	W0325 02:14:52.131355  499754 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:14:52.131371  499754 out.go:241] * 
	* 
	W0325 02:14:52.132231  499754 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0325 02:14:52.134067  499754 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p default-k8s-different-port-20220325020956-262786 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20220325020956-262786
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20220325020956-262786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4",
	        "Created": "2022-03-25T02:10:07.830065737Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501164,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-25T02:10:08.208646726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/hosts",
	        "LogPath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4-json.log",
	        "Name": "/default-k8s-different-port-20220325020956-262786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220325020956-262786:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220325020956-262786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873-init/diff:/var/lib/docker/overlay2/be54eb94fcdf63a43430c63a2deca34069b6322a2c5c80bf32c41c253b4eef44/diff:/var/lib/docker/overlay2/21ae1babc9289fff260c3571871aeb833b37e21656a9cc3eb8df07eb3fe4f79a/diff:/var/lib/docker/overlay2/3ee3331c2ddb88569b411d0ab54a2ef7d7d01ca16c80ced0833580bfbe9bdada/diff:/var/lib/docker/overlay2/d8bc8d60c9bd47ff1095b644ef6d44396d637a148bcebd5ea3b7706fee6b13fe/diff:/var/lib/docker/overlay2/ca1519d93c9c70a99f709b179bab33e31837f4b561c407c362770656a0ad970a/diff:/var/lib/docker/overlay2/8b7ed626d5c01c442f80e5e9bbe87bdaa4e3b209e4d0720010e78ab32631a44a/diff:/var/lib/docker/overlay2/fb54723378f675b6bc72cd8608807007fdf0fc435e1383398764588c2881dcc7/diff:/var/lib/docker/overlay2/20deb1df880f3adcdce0caa0e0b6ce0170bb01f7b7c564aa7c00c10e886a8422/diff:/var/lib/docker/overlay2/3e9c58516a6ca7eb07cbd77ece10826bcffc2c564c20a046413c894f1e457c14/diff:/var/lib/docker/overlay2/9fb4a5
72727350e63058db77497edb3aa8f3fd157bf3faa4b882f3d6218a2d2c/diff:/var/lib/docker/overlay2/2dad70b776042365cd2686f6925d1728b98e82e82f5ec21fcafaa6ce796653ed/diff:/var/lib/docker/overlay2/d94272e0e249e656b05e1483e035d137254d3bab6b9c568065d1f8783a72cf04/diff:/var/lib/docker/overlay2/c92254120acded698585ef577c9ac3d6f73267981cf36a87ee38ccd694f47b94/diff:/var/lib/docker/overlay2/84b4bbb670c367ba779baabe503b5345e2c3e2beb5a4505c3b235e5db4e89ee6/diff:/var/lib/docker/overlay2/4981a02b24aef7d5c066a42837381dcdd4a299b491d8e55523fca674cd0db0d1/diff:/var/lib/docker/overlay2/c3c34e9c466bb3a144a51042f0930825943916afe285a7f97644c400518f341f/diff:/var/lib/docker/overlay2/44f4921d100d6ba90db390588513726503aec84844325bd99eeb137c6018277f/diff:/var/lib/docker/overlay2/a39a458488b7f863079e4c6b58196e8a4f9082987519a734c45a007cd0d94828/diff:/var/lib/docker/overlay2/f0312047c7b0b02fd66fd826e23406e40cee0ca3ceecfd3ead5dcecbc5026230/diff:/var/lib/docker/overlay2/96c9397a20500e41888794ddb5877995a1734042648a24d59ca0d2ec5021e9de/diff:/var/lib/d
ocker/overlay2/8dea2fad08fc7127380e7b5ee48074c49d9bb8abb4e0e626d1753b47e734e16a/diff:/var/lib/docker/overlay2/b45ce3d74626e250be956220b3bdd19784c7b5f160566cf2abc4e3bebec2e787/diff:/var/lib/docker/overlay2/c53d5b53646725c2e75d104fbdf63f67b1a6d4ec7be410f678c39db7ca88704f/diff:/var/lib/docker/overlay2/c989625713fed7c79c6acf122f86cb4a5d36c5c25f16b6ff042aba0f5c76ef40/diff:/var/lib/docker/overlay2/062c90de70f705242f19d7fd008480be165d852e536336d97bcfe7aaba03bc2c/diff:/var/lib/docker/overlay2/9dd5e3e1997449a8dd0820e30ab1aa5b34db265e9783f9431ebcca7ceaf17510/diff:/var/lib/docker/overlay2/4cb50a0a67380109d348cd3005e5b855fceaf243cf5b0130df8952ed58e6c56c/diff:/var/lib/docker/overlay2/1fce572a3789e30bd91fd684a3bc2cae58743b3718b1d078378158f22156795e/diff:/var/lib/docker/overlay2/2bb28738c8f2de75a3da83169e8b29e28c57bf73908fe80dbca06551ac39d459/diff:/var/lib/docker/overlay2/70c1f9c120af3acda7bbd97c063aeed205a47f16b2818b7a2c4e5cfa2e3321bf/diff:/var/lib/docker/overlay2/84cfc718f71abd3da77845f467dceaeceb62953d1f92e9cb2d966b19d2e
9a733/diff:/var/lib/docker/overlay2/8d6f862f75e903072cefca0f974c925dc5946ac5bf7bcb923adecf23cdb3d454/diff:/var/lib/docker/overlay2/778af97f4ec3a1e9ceed247958939b375c3209058ee649ac0231b3ccf59c0e5d/diff:/var/lib/docker/overlay2/c0e0a5b57f41ef9ddf67d67f928bcbbd060abb8aa3ec732c9ee48b3d5ce723a2/diff:/var/lib/docker/overlay2/f4bc2ed173f4985e492d89df0a08aa6017952a9ac37054537d57bb7589c1560e/diff:/var/lib/docker/overlay2/562d496753ef0c1e8279787dfdb7cb4d6e8cfbd0eaf79a1f9dc3fd10916330b5/diff:/var/lib/docker/overlay2/717fb77b4f16514e3bd496845adfe213bd63609053b43f6d800e6757197f0f04/diff:/var/lib/docker/overlay2/4e8d84337665652419a5a40f908d308087c202f55b785114c1e38be84a17eca7/diff:/var/lib/docker/overlay2/5b34f3b4b29c9f9ab991b524096584bbf01d14e9d8d4b7786bda6d28241999e8/diff:/var/lib/docker/overlay2/49e6c28c6a50420d2f8e2a3d3278d425495086d9478a7ece39dd989925949a5d/diff:/var/lib/docker/overlay2/86c1534e0117ca4e106fa3c177c4f1b2d85e37b9d2a5dceeb007afff1721713e/diff:/var/lib/docker/overlay2/c5013a5641f131cadca99884c2ae5b785bfae4
a079463490ea0cd215cd884d43/diff:/var/lib/docker/overlay2/f61ccdb261987275521111370c06a14baf102e5047e24281f278eaaee820a410/diff:/var/lib/docker/overlay2/46838e2b0c3f67b4bfda29963d76e2c8babbd54904a4a6f5745e924a73437c2d/diff:/var/lib/docker/overlay2/16180439a4d3ee12ff794b26cbfd692186d7785b4c6f33c8c57416535667c54e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220325020956-262786",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220325020956-262786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220325020956-262786",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220325020956-262786",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220325020956-262786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "752dba0b0d51e54f65ae14a0ffc9beb457cc13e80db6430b791d2057b780914e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49574"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49573"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49570"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49572"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49571"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/752dba0b0d51",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220325020956-262786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0e271f66fa8d",
	                        "default-k8s-different-port-20220325020956-262786"
	                    ],
	                    "NetworkID": "c5c0224540019d877be5e36bfc556dc0a2d83980f6e5b563be26e38eaad27a38",
	                    "EndpointID": "ded2360703a0715d75d023434cbc7944232d0b2cfe6e083bf6f1fbb0113e0018",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220325020956-262786 -n default-k8s-different-port-20220325020956-262786
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220325020956-262786 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-different-port-20220325020956-262786 logs -n 25: (1.051985536s)
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                   Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | calico-20220325014921-262786                      | calico-20220325014921-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:02:55 UTC | Fri, 25 Mar 2022 02:02:55 UTC |
	|         | logs -n 25                                        |                                             |         |         |                               |                               |
	| delete  | -p                                                | calico-20220325014921-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:02:56 UTC | Fri, 25 Mar 2022 02:02:59 UTC |
	|         | calico-20220325014921-262786                      |                                             |         |         |                               |                               |
	| -p      | custom-weave-20220325014921-262786                | custom-weave-20220325014921-262786          | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:03:22 UTC | Fri, 25 Mar 2022 02:03:23 UTC |
	|         | logs -n 25                                        |                                             |         |         |                               |                               |
	| delete  | -p                                                | custom-weave-20220325014921-262786          | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:03:24 UTC | Fri, 25 Mar 2022 02:03:26 UTC |
	|         | custom-weave-20220325014921-262786                |                                             |         |         |                               |                               |
	| start   | -p                                                | bridge-20220325014920-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:02:59 UTC | Fri, 25 Mar 2022 02:03:56 UTC |
	|         | bridge-20220325014920-262786                      |                                             |         |         |                               |                               |
	|         | --memory=2048                                     |                                             |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                             |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                             |         |         |                               |                               |
	|         | --cni=bridge --driver=docker                      |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                             |         |         |                               |                               |
	| ssh     | -p                                                | bridge-20220325014920-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:03:57 UTC | Fri, 25 Mar 2022 02:03:57 UTC |
	|         | bridge-20220325014920-262786                      |                                             |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                             |         |         |                               |                               |
	| -p      | enable-default-cni-20220325014920-262786          | enable-default-cni-20220325014920-262786    | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:07:38 UTC | Fri, 25 Mar 2022 02:07:39 UTC |
	|         | logs -n 25                                        |                                             |         |         |                               |                               |
	| delete  | -p                                                | enable-default-cni-20220325014920-262786    | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:07:40 UTC | Fri, 25 Mar 2022 02:07:43 UTC |
	|         | enable-default-cni-20220325014920-262786          |                                             |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                  | no-preload-20220325020326-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:08:18 UTC | Fri, 25 Mar 2022 02:08:19 UTC |
	|         | logs -n 25                                        |                                             |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20220325020743-262786           | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:07:43 UTC | Fri, 25 Mar 2022 02:08:42 UTC |
	|         | embed-certs-20220325020743-262786                 |                                             |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                             |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                             |         |         |                               |                               |
	|         | --driver=docker                                   |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                             |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.3                      |                                             |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20220325020743-262786           | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:08:51 UTC | Fri, 25 Mar 2022 02:08:52 UTC |
	|         | embed-certs-20220325020743-262786                 |                                             |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                             |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                             |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20220325020743-262786           | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:08:52 UTC | Fri, 25 Mar 2022 02:09:12 UTC |
	|         | embed-certs-20220325020743-262786                 |                                             |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                             |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20220325020743-262786           | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:12 UTC | Fri, 25 Mar 2022 02:09:12 UTC |
	|         | embed-certs-20220325020743-262786                 |                                             |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                             |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786             | old-k8s-version-20220325015306-262786       | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:29 UTC | Fri, 25 Mar 2022 02:09:30 UTC |
	|         | logs -n 25                                        |                                             |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786             | old-k8s-version-20220325015306-262786       | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:31 UTC | Fri, 25 Mar 2022 02:09:32 UTC |
	|         | logs -n 25                                        |                                             |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220325015306-262786       | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:32 UTC | Fri, 25 Mar 2022 02:09:33 UTC |
	|         | old-k8s-version-20220325015306-262786             |                                             |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                             |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                             |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20220325015306-262786       | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:33 UTC | Fri, 25 Mar 2022 02:09:39 UTC |
	|         | old-k8s-version-20220325015306-262786             |                                             |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                             |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20220325015306-262786       | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:39 UTC | Fri, 25 Mar 2022 02:09:39 UTC |
	|         | old-k8s-version-20220325015306-262786             |                                             |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                             |         |         |                               |                               |
	| -p      | bridge-20220325014920-262786                      | bridge-20220325014920-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:52 UTC | Fri, 25 Mar 2022 02:09:53 UTC |
	|         | logs -n 25                                        |                                             |         |         |                               |                               |
	| delete  | -p                                                | bridge-20220325014920-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:53 UTC | Fri, 25 Mar 2022 02:09:56 UTC |
	|         | bridge-20220325014920-262786                      |                                             |         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:56 UTC | Fri, 25 Mar 2022 02:09:56 UTC |
	|         | disable-driver-mounts-20220325020956-262786       |                                             |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20220325020743-262786           | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:12 UTC | Fri, 25 Mar 2022 02:14:36 UTC |
	|         | embed-certs-20220325020743-262786                 |                                             |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                             |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                             |         |         |                               |                               |
	|         | --driver=docker                                   |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                             |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.3                      |                                             |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20220325020743-262786           | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:47 UTC | Fri, 25 Mar 2022 02:14:47 UTC |
	|         | embed-certs-20220325020743-262786                 |                                             |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                             |         |         |                               |                               |
	| pause   | -p                                                | embed-certs-20220325020743-262786           | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:47 UTC | Fri, 25 Mar 2022 02:14:48 UTC |
	|         | embed-certs-20220325020743-262786                 |                                             |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                             |         |         |                               |                               |
	| unpause | -p                                                | embed-certs-20220325020743-262786           | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:49 UTC | Fri, 25 Mar 2022 02:14:50 UTC |
	|         | embed-certs-20220325020743-262786                 |                                             |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                             |         |         |                               |                               |
	|---------|---------------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 02:09:57
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
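
The "Log line format" header above is klog's standard prefix. A small illustrative parser for lines in that format (the regex and field names are ours, not minikube's):

package main

import (
	"fmt"
	"regexp"
)

// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I0325 02:09:57.047407  499754 out.go:297] Setting OutFile to fd 1 ..."
	if m := klogRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("level=%s date=%s time=%s pid=%s src=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
}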
	I0325 02:09:57.047407  499754 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:09:57.047555  499754 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:09:57.047569  499754 out.go:310] Setting ErrFile to fd 2...
	I0325 02:09:57.047574  499754 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:09:57.047691  499754 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:09:57.047991  499754 out.go:304] Setting JSON to false
	I0325 02:09:57.049296  499754 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17269,"bootTime":1648156928,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:09:57.049376  499754 start.go:125] virtualization: kvm guest
	I0325 02:09:57.052171  499754 out.go:176] * [default-k8s-different-port-20220325020956-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:09:57.053840  499754 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:09:57.052363  499754 notify.go:193] Checking for updates...
	I0325 02:09:57.055543  499754 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:09:57.057049  499754 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:09:57.058584  499754 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:09:57.059988  499754 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:09:57.061783  499754 config.go:176] Loaded profile config "embed-certs-20220325020743-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:09:57.061898  499754 config.go:176] Loaded profile config "no-preload-20220325020326-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:09:57.061977  499754 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0325 02:09:57.062034  499754 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:09:57.105819  499754 docker.go:136] docker version: linux-20.10.14
	I0325 02:09:57.105933  499754 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:09:57.205697  499754 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:09:57.137437369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
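
The cli_runner call above shells out to docker system info --format "{{json .}}" and decodes the result. A sketch of that pattern, decoding only a few of the fields visible in the dump (the struct here is trimmed and illustrative; the real types.Info carries many more):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo picks out a handful of the fields shown in the log above.
type dockerInfo struct {
	NCPU              int    `json:"NCPU"`
	MemTotal          int64  `json:"MemTotal"`
	OperatingSystem   string `json:"OperatingSystem"`
	ServerVersion     string `json:"ServerVersion"`
	ContainersRunning int    `json:"ContainersRunning"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("%s %s: %d CPUs, %d bytes RAM, %d containers running\n",
		info.OperatingSystem, info.ServerVersion, info.NCPU, info.MemTotal, info.ContainersRunning)
}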
	I0325 02:09:57.205810  499754 docker.go:253] overlay module found
	I0325 02:09:57.208293  499754 out.go:176] * Using the docker driver based on user configuration
	I0325 02:09:57.208333  499754 start.go:284] selected driver: docker
	I0325 02:09:57.208341  499754 start.go:801] validating driver "docker" against <nil>
	I0325 02:09:57.208362  499754 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:09:57.208430  499754 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:09:57.208452  499754 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:09:57.210227  499754 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:09:57.210929  499754 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:09:57.337156  499754 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:09:57.251383592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:09:57.337361  499754 start_flags.go:290] no existing cluster config was found, will generate one from the flags 
	I0325 02:09:57.337580  499754 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 02:09:57.337610  499754 cni.go:93] Creating CNI manager for ""
	I0325 02:09:57.337622  499754 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:09:57.337636  499754 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0325 02:09:57.337643  499754 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0325 02:09:57.337652  499754 start_flags.go:299] Found "CNI" CNI - setting NetworkPlugin=cni
	I0325 02:09:57.337665  499754 start_flags.go:304] config:
	{Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:09:57.340184  499754 out.go:176] * Starting control plane node default-k8s-different-port-20220325020956-262786 in cluster default-k8s-different-port-20220325020956-262786
	I0325 02:09:57.340226  499754 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:09:57.341673  499754 out.go:176] * Pulling base image ...
	I0325 02:09:57.341695  499754 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:09:57.341729  499754 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
	I0325 02:09:57.341751  499754 cache.go:57] Caching tarball of preloaded images
	I0325 02:09:57.341797  499754 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:09:57.341986  499754 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 02:09:57.342010  499754 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on containerd
	I0325 02:09:57.342169  499754 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json ...
	I0325 02:09:57.342200  499754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json: {Name:mk6616c41d8f156711ece89bc5f7c3ef7182991a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:09:57.377860  499754 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:09:57.377891  499754 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:09:57.377908  499754 cache.go:208] Successfully downloaded all kic artifacts
	I0325 02:09:57.377958  499754 start.go:348] acquiring machines lock for default-k8s-different-port-20220325020956-262786: {Name:mk1740da455fcceda9a6f7400776a3a68790d09c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:09:57.378118  499754 start.go:352] acquired machines lock for "default-k8s-different-port-20220325020956-262786" in 134.875µs
	I0325 02:09:57.378156  499754 start.go:90] Provisioning new machine with config: &{Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:09:57.378297  499754 start.go:127] createHost starting for "" (driver="docker")
	I0325 02:09:52.953117  493081 pod_ready.go:102] pod "etcd-embed-certs-20220325020743-262786" in "kube-system" namespace has status "Ready":"False"
	I0325 02:09:55.452601  493081 pod_ready.go:102] pod "etcd-embed-certs-20220325020743-262786" in "kube-system" namespace has status "Ready":"False"
	I0325 02:09:57.452828  493081 pod_ready.go:92] pod "etcd-embed-certs-20220325020743-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 02:09:57.452865  493081 pod_ready.go:81] duration metric: took 11.009877863s waiting for pod "etcd-embed-certs-20220325020743-262786" in "kube-system" namespace to be "Ready" ...
	I0325 02:09:57.452883  493081 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220325020743-262786" in "kube-system" namespace to be "Ready" ...
	I0325 02:09:57.457611  493081 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220325020743-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 02:09:57.457632  493081 pod_ready.go:81] duration metric: took 4.740971ms waiting for pod "kube-apiserver-embed-certs-20220325020743-262786" in "kube-system" namespace to be "Ready" ...
	I0325 02:09:57.457645  493081 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220325020743-262786" in "kube-system" namespace to be "Ready" ...
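
The pod_ready.go waits above (up to 4m0s per pod) amount to polling each pod's Ready condition. A minimal client-go sketch of that idea; the helper name and intervals are ours, assuming an already configured clientset:

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the named pod reports Ready=True or the timeout expires.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // pod not visible yet; keep polling
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}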
	I0325 02:09:56.454189  496534 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 02:09:56.454280  496534 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0325 02:09:56.454358  496534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:09:56.478863  496534 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:09:56.478889  496534 containerd.go:526] Images already preloaded, skipping extraction
	I0325 02:09:56.478968  496534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:09:56.506790  496534 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:09:56.506814  496534 cache_images.go:84] Images are preloaded, skipping loading
	I0325 02:09:56.506861  496534 ssh_runner.go:195] Run: sudo crictl info
	I0325 02:09:56.534382  496534 cni.go:93] Creating CNI manager for ""
	I0325 02:09:56.534407  496534 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:09:56.534417  496534 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 02:09:56.534432  496534 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220325015306-262786 NodeName:old-k8s-version-20220325015306-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 02:09:56.534591  496534 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20220325015306-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220325015306-262786
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
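	
	The kubeadm manifest printed above is rendered by minikube from a Go text/template. A trimmed, illustrative reconstruction of that rendering step (this template covers only a few of the fields and is not minikube's actual one):
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	clusterName: {{.ClusterName}}
	controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`
	
	func main() {
		t := template.Must(template.New("cfg").Parse(clusterCfg))
		_ = t.Execute(os.Stdout, map[string]string{
			"ClusterName":       "old-k8s-version-20220325015306-262786",
			"Port":              "8443",
			"KubernetesVersion": "v1.16.0",
			"PodSubnet":         "10.244.0.0/16",
			"ServiceSubnet":     "10.96.0.0/12",
		})
	}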
	
	I0325 02:09:56.534694  496534 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20220325015306-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0325 02:09:56.534750  496534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0325 02:09:56.542421  496534 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 02:09:56.542495  496534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 02:09:56.549460  496534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (582 bytes)
	I0325 02:09:56.562979  496534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0325 02:09:56.577951  496534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0325 02:09:56.592248  496534 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0325 02:09:56.595272  496534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:09:56.604751  496534 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786 for IP: 192.168.76.2
	I0325 02:09:56.604865  496534 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 02:09:56.604900  496534 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 02:09:56.604970  496534 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.key
	I0325 02:09:56.605043  496534 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25
	I0325 02:09:56.605077  496534 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key
	I0325 02:09:56.605175  496534 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 02:09:56.605204  496534 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 02:09:56.605215  496534 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 02:09:56.605238  496534 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 02:09:56.605269  496534 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 02:09:56.605293  496534 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 02:09:56.605330  496534 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:09:56.605914  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 02:09:56.624688  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0325 02:09:56.642854  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 02:09:56.661109  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0325 02:09:56.679335  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 02:09:56.697776  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 02:09:56.717820  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 02:09:56.736028  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 02:09:56.754670  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 02:09:56.773858  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 02:09:56.794511  496534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 02:09:56.814526  496534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 02:09:56.827928  496534 ssh_runner.go:195] Run: openssl version
	I0325 02:09:56.833166  496534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 02:09:56.841819  496534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:09:56.844899  496534 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:09:56.844950  496534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:09:56.849575  496534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0325 02:09:56.856804  496534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 02:09:56.865991  496534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 02:09:56.870126  496534 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 02:09:56.870181  496534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 02:09:56.875734  496534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
	I0325 02:09:56.883338  496534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 02:09:56.891273  496534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 02:09:56.894298  496534 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 02:09:56.894349  496534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 02:09:56.899624  496534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
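
Each certificate in the block above gets the same treatment: copy it under /usr/share/ca-certificates, hash it with openssl x509 -hash, then link it as <hash>.0 so OpenSSL's directory lookup finds it. A sketch of that step (ours, not minikube's actual certs.go):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert exposes pemPath to OpenSSL by creating the <subject-hash>.0
// symlink, mirroring the `openssl x509 -hash` + `ln -fs` pair in the log.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -f: drop any stale link first
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}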
	I0325 02:09:56.907811  496534 kubeadm.go:391] StartCluster: {Name:old-k8s-version-20220325015306-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:09:56.907928  496534 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 02:09:56.907967  496534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:09:56.934628  496534 cri.go:87] found id: "9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c"
	I0325 02:09:56.934666  496534 cri.go:87] found id: "f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e"
	I0325 02:09:56.934675  496534 cri.go:87] found id: "2a8a16a4c5ab06cec61505599bfcd94a42a8de336bbe343006809032ae98bee0"
	I0325 02:09:56.934690  496534 cri.go:87] found id: "0dcaa5ddf16d74bb0f7b672cf9c1f93a9049cfc9e9fa01287dfc31c913129a95"
	I0325 02:09:56.934698  496534 cri.go:87] found id: "0f2defa775551729a53f4b102a79f5f1c8e3853bbb12ba362f6555860b09d99a"
	I0325 02:09:56.934707  496534 cri.go:87] found id: "1366a173f44ada0abf1e4f2c5003b1d9df1c0ee0a950928cdf3a5f3f7048faaa"
	I0325 02:09:56.934715  496534 cri.go:87] found id: ""
	I0325 02:09:56.934765  496534 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0325 02:09:56.951852  496534 cri.go:114] JSON = null
	W0325 02:09:56.951908  496534 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0325 02:09:56.951962  496534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 02:09:56.959522  496534 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0325 02:09:56.959549  496534 kubeadm.go:601] restartCluster start
	I0325 02:09:56.959604  496534 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0325 02:09:56.966307  496534 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:56.967152  496534 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220325015306-262786" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:09:56.967679  496534 kubeconfig.go:127] "old-k8s-version-20220325015306-262786" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0325 02:09:56.968491  496534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:09:56.970296  496534 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0325 02:09:56.977549  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:56.977608  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:56.986444  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:57.186898  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:57.187004  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:57.196337  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:57.387521  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:57.387591  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:57.397853  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:57.586709  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:57.587435  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:57.597694  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:57.786931  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:57.787030  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:57.796873  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:57.987229  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:57.987316  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:58.021144  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:58.187383  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:58.187493  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:58.199074  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:58.387227  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:58.387310  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:58.396431  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:58.586584  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:58.586654  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:58.595808  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:58.787151  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:58.787256  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:58.796540  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:58.986719  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:58.986822  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:58.995845  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:59.187028  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:59.187118  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:59.195937  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:59.387149  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:59.387235  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:59.396773  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:59.587224  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:59.587314  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:59.596213  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
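The block of "Checking apiserver status" entries above is one probe in a loop: run sudo pgrep -xnf kube-apiserver.*minikube.* on the node, and treat pgrep's exit status 1 (no match) as "apiserver process not up yet". A minimal local sketch of that check, assuming pgrep is on PATH; the ~200ms cadence mirrors the timestamps above, everything else is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverPID mirrors the probe in the log: pgrep -x (exact match),
	// -n (newest), -f (match the full command line). pgrep exits 1 when
	// nothing matches, which surfaces here as a non-nil error.
	func apiserverPID(pattern string) (string, error) {
		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
		if err != nil {
			return "", fmt.Errorf("apiserver not running: %w", err)
		}
		return string(out), nil
	}

	func main() {
		for i := 0; i < 5; i++ {
			if pid, err := apiserverPID("kube-apiserver.*minikube.*"); err == nil {
				fmt.Print("apiserver pid: ", pid)
				return
			}
			time.Sleep(200 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver process")
	}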
	I0325 02:09:57.381757  499754 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0325 02:09:57.382033  499754 start.go:161] libmachine.API.Create for "default-k8s-different-port-20220325020956-262786" (driver="docker")
	I0325 02:09:57.382073  499754 client.go:168] LocalClient.Create starting
	I0325 02:09:57.382153  499754 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0325 02:09:57.382199  499754 main.go:130] libmachine: Decoding PEM data...
	I0325 02:09:57.382233  499754 main.go:130] libmachine: Parsing certificate...
	I0325 02:09:57.382304  499754 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0325 02:09:57.382333  499754 main.go:130] libmachine: Decoding PEM data...
	I0325 02:09:57.382352  499754 main.go:130] libmachine: Parsing certificate...
	I0325 02:09:57.382749  499754 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20220325020956-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0325 02:09:57.418938  499754 cli_runner.go:180] docker network inspect default-k8s-different-port-20220325020956-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0325 02:09:57.419018  499754 network_create.go:254] running [docker network inspect default-k8s-different-port-20220325020956-262786] to gather additional debugging logs...
	I0325 02:09:57.419039  499754 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20220325020956-262786
	W0325 02:09:57.455464  499754 cli_runner.go:180] docker network inspect default-k8s-different-port-20220325020956-262786 returned with exit code 1
	I0325 02:09:57.455496  499754 network_create.go:257] error running [docker network inspect default-k8s-different-port-20220325020956-262786]: docker network inspect default-k8s-different-port-20220325020956-262786: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220325020956-262786
	I0325 02:09:57.455540  499754 network_create.go:259] output of [docker network inspect default-k8s-different-port-20220325020956-262786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220325020956-262786
	
	** /stderr **
	I0325 02:09:57.455625  499754 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:09:57.493122  499754 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000384608] misses:0}
	I0325 02:09:57.493180  499754 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0325 02:09:57.493196  499754 network_create.go:106] attempt to create docker network default-k8s-different-port-20220325020956-262786 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0325 02:09:57.493244  499754 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220325020956-262786
	I0325 02:09:57.569900  499754 network_create.go:90] docker network default-k8s-different-port-20220325020956-262786 192.168.49.0/24 created
	I0325 02:09:57.569948  499754 kic.go:106] calculated static IP "192.168.49.2" for the "default-k8s-different-port-20220325020956-262786" container
	I0325 02:09:57.570021  499754 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0325 02:09:57.608487  499754 cli_runner.go:133] Run: docker volume create default-k8s-different-port-20220325020956-262786 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220325020956-262786 --label created_by.minikube.sigs.k8s.io=true
	I0325 02:09:57.643256  499754 oci.go:102] Successfully created a docker volume default-k8s-different-port-20220325020956-262786
	I0325 02:09:57.643337  499754 cli_runner.go:133] Run: docker run --rm --name default-k8s-different-port-20220325020956-262786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220325020956-262786 --entrypoint /usr/bin/test -v default-k8s-different-port-20220325020956-262786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0325 02:09:58.217460  499754 oci.go:106] Successfully prepared a docker volume default-k8s-different-port-20220325020956-262786
	I0325 02:09:58.217533  499754 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:09:58.217575  499754 kic.go:179] Starting extracting preloaded images to volume ...
	I0325 02:09:58.217660  499754 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220325020956-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
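The extraction step above uses a useful pattern: rather than unpacking the preload tarball on the host, a throwaway container is started with /usr/bin/tar as its entrypoint, the tarball bind-mounted read-only, and the target docker volume mounted at /extractDir, so tar inside the container populates the volume directly. A stripped-down sketch of that pattern; the tarball path, volume name, and image below are placeholders, not the exact kicbase invocation:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// -I lz4 tells tar to filter the archive through lz4 before extracting.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", "/tmp/preloaded-images.tar.lz4:/preloaded.tar:ro", // placeholder tarball path
			"-v", "myvolume:/extractDir",                            // placeholder volume
			"some-image-with-tar-and-lz4",                           // placeholder image
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
	}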
	I0325 02:09:58.468262  493081 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220325020743-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 02:09:58.468300  493081 pod_ready.go:81] duration metric: took 1.010645348s waiting for pod "kube-controller-manager-embed-certs-20220325020743-262786" in "kube-system" namespace to be "Ready" ...
	I0325 02:09:58.468316  493081 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tln8x" in "kube-system" namespace to be "Ready" ...
	I0325 02:09:58.472665  493081 pod_ready.go:92] pod "kube-proxy-tln8x" in "kube-system" namespace has status "Ready":"True"
	I0325 02:09:58.472684  493081 pod_ready.go:81] duration metric: took 4.36082ms waiting for pod "kube-proxy-tln8x" in "kube-system" namespace to be "Ready" ...
	I0325 02:09:58.472693  493081 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220325020743-262786" in "kube-system" namespace to be "Ready" ...
	I0325 02:09:58.476628  493081 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220325020743-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 02:09:58.476650  493081 pod_ready.go:81] duration metric: took 3.951806ms waiting for pod "kube-scheduler-embed-certs-20220325020743-262786" in "kube-system" namespace to be "Ready" ...
	I0325 02:09:58.476659  493081 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace to be "Ready" ...
	I0325 02:10:00.657739  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:09:59.786619  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:59.786707  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:59.796501  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:59.986664  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:59.986763  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:09:59.995703  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:09:59.995733  496534 api_server.go:165] Checking apiserver status ...
	I0325 02:09:59.995777  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:10:00.003939  496534 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:10:00.003973  496534 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0325 02:10:00.003983  496534 kubeadm.go:1067] stopping kube-system containers ...
	I0325 02:10:00.003999  496534 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0325 02:10:00.004053  496534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:10:00.030310  496534 cri.go:87] found id: "9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c"
	I0325 02:10:00.030347  496534 cri.go:87] found id: "f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e"
	I0325 02:10:00.030357  496534 cri.go:87] found id: "2a8a16a4c5ab06cec61505599bfcd94a42a8de336bbe343006809032ae98bee0"
	I0325 02:10:00.030366  496534 cri.go:87] found id: "0dcaa5ddf16d74bb0f7b672cf9c1f93a9049cfc9e9fa01287dfc31c913129a95"
	I0325 02:10:00.030374  496534 cri.go:87] found id: "0f2defa775551729a53f4b102a79f5f1c8e3853bbb12ba362f6555860b09d99a"
	I0325 02:10:00.030382  496534 cri.go:87] found id: "1366a173f44ada0abf1e4f2c5003b1d9df1c0ee0a950928cdf3a5f3f7048faaa"
	I0325 02:10:00.030389  496534 cri.go:87] found id: ""
	I0325 02:10:00.030396  496534 cri.go:232] Stopping containers: [9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e 2a8a16a4c5ab06cec61505599bfcd94a42a8de336bbe343006809032ae98bee0 0dcaa5ddf16d74bb0f7b672cf9c1f93a9049cfc9e9fa01287dfc31c913129a95 0f2defa775551729a53f4b102a79f5f1c8e3853bbb12ba362f6555860b09d99a 1366a173f44ada0abf1e4f2c5003b1d9df1c0ee0a950928cdf3a5f3f7048faaa]
	I0325 02:10:00.030441  496534 ssh_runner.go:195] Run: which crictl
	I0325 02:10:00.033546  496534 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 9d536416454c91006817f6465128489c3e8fcbae9a458bffeaec28c268a9a65c f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e 2a8a16a4c5ab06cec61505599bfcd94a42a8de336bbe343006809032ae98bee0 0dcaa5ddf16d74bb0f7b672cf9c1f93a9049cfc9e9fa01287dfc31c913129a95 0f2defa775551729a53f4b102a79f5f1c8e3853bbb12ba362f6555860b09d99a 1366a173f44ada0abf1e4f2c5003b1d9df1c0ee0a950928cdf3a5f3f7048faaa
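The "stopping kube-system containers" step above is two crictl calls: list the IDs of every container labelled with the kube-system pod namespace, then pass the whole batch to a single crictl stop. A sketch of that sequence, assuming crictl is on PATH and the caller has the needed privileges:

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// --quiet prints only container IDs; --label filters by key=value.
		out, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatal(err)
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			return // nothing to stop
		}
		// crictl stop accepts several IDs in one invocation, as in the log.
		args := append([]string{"stop"}, ids...)
		if err := exec.Command("crictl", args...).Run(); err != nil {
			log.Fatal(err)
		}
	}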
	I0325 02:10:00.062332  496534 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0325 02:10:00.072359  496534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:10:00.079466  496534 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5751 Mar 25 01:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5787 Mar 25 01:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5939 Mar 25 01:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Mar 25 01:57 /etc/kubernetes/scheduler.conf
	
	I0325 02:10:00.079531  496534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0325 02:10:00.086733  496534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0325 02:10:00.093510  496534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0325 02:10:00.100688  496534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0325 02:10:00.108477  496534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:10:00.116249  496534 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0325 02:10:00.116273  496534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:10:00.175933  496534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:10:00.770714  496534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:10:00.943605  496534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:10:01.010715  496534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:10:01.116804  496534 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:10:01.116888  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:10:01.626820  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:10:02.127178  496534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:10:02.192735  496534 api_server.go:71] duration metric: took 1.075932213s to wait for apiserver process to appear ...
	I0325 02:10:02.192775  496534 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:10:02.192791  496534 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0325 02:10:02.193182  496534 api_server.go:256] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0325 02:10:02.693918  496534 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0325 02:10:03.157771  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:05.158252  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:07.658178  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:06.604627  496534 api_server.go:266] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0325 02:10:06.604658  496534 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0325 02:10:06.693941  496534 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0325 02:10:06.813382  496534 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:10:06.813427  496534 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:10:07.194038  496534 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0325 02:10:07.199671  496534 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:10:07.199698  496534 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:10:07.693961  496534 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0325 02:10:07.698533  496534 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:10:07.698567  496534 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:10:08.193736  496534 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0325 02:10:08.200650  496534 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0325 02:10:08.209168  496534 api_server.go:140] control plane version: v1.16.0
	I0325 02:10:08.209201  496534 api_server.go:130] duration metric: took 6.016418382s to wait for apiserver health ...
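The healthz exchange above has three phases worth reading closely: the 403 means the apiserver is answering TLS but RBAC has not yet been bootstrapped for anonymous requests, the 500s enumerate poststarthooks still pending (each [-] line flips to [+] as a hook completes), and only a 200 with body "ok" counts as healthy. A compact sketch of that polling loop; the endpoint is the one from the log, and the certificate handling is deliberately simplified (the real client trusts the cluster CA rather than skipping verification):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Self-signed apiserver cert: skip verification in this sketch only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.76.2:8443/healthz"
		for {
			resp, err := client.Get(url)
			if err != nil {
				// connection refused: apiserver not listening yet
				time.Sleep(500 * time.Millisecond)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			time.Sleep(500 * time.Millisecond)
		}
	}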
	I0325 02:10:08.209214  496534 cni.go:93] Creating CNI manager for ""
	I0325 02:10:08.209222  496534 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:10:08.211937  496534 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:10:08.211995  496534 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:10:08.216151  496534 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0325 02:10:08.216175  496534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:10:08.230684  496534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:10:08.456529  496534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:10:08.463796  496534 system_pods.go:59] 8 kube-system pods found
	I0325 02:10:08.463830  496534 system_pods.go:61] "coredns-5644d7b6d9-trm4j" [9facf37e-d2f8-4d16-bde1-5c3063be4439] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0325 02:10:08.463837  496534 system_pods.go:61] "etcd-old-k8s-version-20220325015306-262786" [b44593d0-68c8-4a88-942a-108ed1c244c6] Running
	I0325 02:10:08.463844  496534 system_pods.go:61] "kindnet-rx7hj" [bf35a126-09fa-4db9-9aa4-2cb811bf4595] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:10:08.463851  496534 system_pods.go:61] "kube-apiserver-old-k8s-version-20220325015306-262786" [71c77eeb-8312-4550-8800-57f74f6c9c19] Running
	I0325 02:10:08.463856  496534 system_pods.go:61] "kube-controller-manager-old-k8s-version-20220325015306-262786" [d0b14926-624e-4820-8cbd-ddceaaea8158] Running
	I0325 02:10:08.463860  496534 system_pods.go:61] "kube-proxy-wxllf" [8df13659-eaff-4414-b783-5e971e2dae50] Running
	I0325 02:10:08.463864  496534 system_pods.go:61] "kube-scheduler-old-k8s-version-20220325015306-262786" [1959bc6c-50bb-4c1a-b023-da9e0537f39b] Running
	I0325 02:10:08.463869  496534 system_pods.go:61] "storage-provisioner" [883b8731-7316-4492-8ee8-5ad30fc133c0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0325 02:10:08.463876  496534 system_pods.go:74] duration metric: took 7.324225ms to wait for pod list to return data ...
	I0325 02:10:08.463886  496534 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:10:08.466259  496534 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:10:08.466284  496534 node_conditions.go:123] node cpu capacity is 8
	I0325 02:10:08.466295  496534 node_conditions.go:105] duration metric: took 2.40246ms to run NodePressure ...
	I0325 02:10:08.466314  496534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:10:08.685667  496534 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0325 02:10:08.690463  496534 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I0325 02:10:09.055023  496534 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I0325 02:10:09.496283  496534 retry.go:31] will retry after 527.46423ms: kubelet not initialised
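The retry.go lines above show randomized, roughly growing delays between attempts (360ms, 436ms, 527ms, ...). A small loop in that spirit; the exact backoff policy is not visible in the log, so the doubling-plus-jitter below is an assumption:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs f until it succeeds or attempts are exhausted, sleeping a
	// jittered, doubling delay between tries so parallel workers desynchronize.
	func retry(attempts int, base time.Duration, f func() error) error {
		delay := base
		for i := 0; i < attempts; i++ {
			if err := f(); err == nil {
				return nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v\n", sleep)
			time.Sleep(sleep)
			delay *= 2
		}
		return errors.New("retries exhausted")
	}

	func main() {
		start := time.Now()
		_ = retry(5, 300*time.Millisecond, func() error {
			if time.Since(start) < time.Second {
				return errors.New("kubelet not initialised")
			}
			return nil
		})
	}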
	I0325 02:10:07.677687  499754 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220325020956-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (9.45997079s)
	I0325 02:10:07.677755  499754 kic.go:188] duration metric: took 9.460175 seconds to extract preloaded images to volume
	W0325 02:10:07.677813  499754 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0325 02:10:07.677832  499754 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0325 02:10:07.677944  499754 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0325 02:10:07.782903  499754 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20220325020956-262786 --name default-k8s-different-port-20220325020956-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220325020956-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20220325020956-262786 --network default-k8s-different-port-20220325020956-262786 --ip 192.168.49.2 --volume default-k8s-different-port-20220325020956-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0325 02:10:08.218357  499754 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Running}}
	I0325 02:10:08.261389  499754 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:10:08.303094  499754 cli_runner.go:133] Run: docker exec default-k8s-different-port-20220325020956-262786 stat /var/lib/dpkg/alternatives/iptables
	I0325 02:10:08.376219  499754 oci.go:281] the created container "default-k8s-different-port-20220325020956-262786" has a running status.
	I0325 02:10:08.376256  499754 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa...
	I0325 02:10:08.589639  499754 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0325 02:10:08.698073  499754 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:10:08.736950  499754 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0325 02:10:08.736977  499754 kic_runner.go:114] Args: [docker exec --privileged default-k8s-different-port-20220325020956-262786 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0325 02:10:08.829176  499754 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:10:08.866417  499754 machine.go:88] provisioning docker machine ...
	I0325 02:10:08.866468  499754 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220325020956-262786"
	I0325 02:10:08.866534  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:08.903231  499754 main.go:130] libmachine: Using SSH client type: native
	I0325 02:10:08.903464  499754 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49574 <nil> <nil>}
	I0325 02:10:08.903490  499754 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220325020956-262786 && echo "default-k8s-different-port-20220325020956-262786" | sudo tee /etc/hostname
	I0325 02:10:09.040143  499754 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220325020956-262786
	
	I0325 02:10:09.040223  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:09.075006  499754 main.go:130] libmachine: Using SSH client type: native
	I0325 02:10:09.075167  499754 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49574 <nil> <nil>}
	I0325 02:10:09.075194  499754 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220325020956-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220325020956-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220325020956-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 02:10:09.198799  499754 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 02:10:09.198848  499754 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:10:09.198872  499754 ubuntu.go:177] setting up certificates
	I0325 02:10:09.198884  499754 provision.go:83] configureAuth start
	I0325 02:10:09.198944  499754 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:10:09.231874  499754 provision.go:138] copyHostCerts
	I0325 02:10:09.231935  499754 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:10:09.231946  499754 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:10:09.232012  499754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:10:09.232111  499754 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:10:09.232127  499754 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:10:09.232160  499754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:10:09.232229  499754 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:10:09.232243  499754 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:10:09.232276  499754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:10:09.232335  499754 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220325020956-262786 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220325020956-262786]
	I0325 02:10:09.484166  499754 provision.go:172] copyRemoteCerts
	I0325 02:10:09.484235  499754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:10:09.484295  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:09.519681  499754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49574 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:10:09.610194  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:10:09.627463  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0325 02:10:09.644929  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0325 02:10:09.662266  499754 provision.go:86] duration metric: configureAuth took 463.370434ms
	I0325 02:10:09.662291  499754 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:10:09.662460  499754 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:10:09.662482  499754 machine.go:91] provisioned docker machine in 796.03566ms
	I0325 02:10:09.662488  499754 client.go:171] LocalClient.Create took 12.280404225s
	I0325 02:10:09.662507  499754 start.go:169] duration metric: libmachine.API.Create for "default-k8s-different-port-20220325020956-262786" took 12.280475448s
	I0325 02:10:09.662521  499754 start.go:302] post-start starting for "default-k8s-different-port-20220325020956-262786" (driver="docker")
	I0325 02:10:09.662528  499754 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:10:09.662596  499754 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:10:09.662644  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:09.695792  499754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49574 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:10:09.782770  499754 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:10:09.785512  499754 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:10:09.785540  499754 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:10:09.785555  499754 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:10:09.785565  499754 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:10:09.785581  499754 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:10:09.785640  499754 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:10:09.785745  499754 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:10:09.785860  499754 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:10:09.792604  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:10:09.810666  499754 start.go:305] post-start completed in 148.129279ms
	I0325 02:10:09.811078  499754 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:10:09.843910  499754 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json ...
	I0325 02:10:09.844135  499754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:10:09.844175  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:09.875677  499754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49574 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:10:09.959378  499754 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:10:09.963145  499754 start.go:130] duration metric: createHost completed in 12.584833183s
	I0325 02:10:09.963173  499754 start.go:81] releasing machines lock for "default-k8s-different-port-20220325020956-262786", held for 12.58503599s
	I0325 02:10:09.963258  499754 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:10:09.996631  499754 ssh_runner.go:195] Run: systemctl --version
	I0325 02:10:09.996689  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:09.996708  499754 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:10:09.996794  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:10.033103  499754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49574 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:10:10.033453  499754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49574 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:10:10.134432  499754 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:10:10.145186  499754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:10:10.155725  499754 docker.go:183] disabling docker service ...
	I0325 02:10:10.155795  499754 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:10:10.174113  499754 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:10:10.183613  499754 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:10:10.266394  499754 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:10:10.348135  499754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 02:10:10.357620  499754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:10:10.370205  499754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
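The opaque blob in the command above is simply /etc/containerd/config.toml, base64-encoded so the file survives the shell round-trip unmangled, then decoded on the node with base64 -d. Decoding the opening bytes confirms it is plain TOML; a tiny check (the constant below is just the first chunk of the blob, re-padded so it decodes on its own):

	package main

	import (
		"encoding/base64"
		"fmt"
	)

	func main() {
		const head = "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIg=="
		b, err := base64.StdEncoding.DecodeString(head)
		if err != nil {
			panic(err)
		}
		// Prints the first two TOML lines: version = 2, root = "/var/lib/containerd"
		fmt.Println(string(b))
	}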
	I0325 02:10:10.383477  499754 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:10:10.389812  499754 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:10:10.396907  499754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:10:10.474633  499754 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:10:10.542119  499754 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:10:10.542209  499754 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:10:10.546078  499754 start.go:462] Will wait 60s for crictl version
	I0325 02:10:10.546139  499754 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:10:10.570521  499754 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:10:10Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0325 02:10:10.158133  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:12.657546  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:10.028872  496534 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0325 02:10:10.813640  496534 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0325 02:10:12.320569  496534 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0325 02:10:13.398744  496534 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0325 02:10:14.657669  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:17.157876  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:15.272709  496534 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0325 02:10:17.826836  496534 retry.go:31] will retry after 5.131623747s: kubelet not initialised
	I0325 02:10:21.620886  499754 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:10:21.644208  499754 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 02:10:21.644274  499754 ssh_runner.go:195] Run: containerd --version
	I0325 02:10:21.663628  499754 ssh_runner.go:195] Run: containerd --version
	I0325 02:10:21.687141  499754 out.go:176] * Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	I0325 02:10:21.687240  499754 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20220325020956-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:10:21.722801  499754 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0325 02:10:21.726147  499754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:10:21.737667  499754 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 02:10:21.737773  499754 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:10:21.737837  499754 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:10:21.760396  499754 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:10:21.760417  499754 containerd.go:526] Images already preloaded, skipping extraction
	I0325 02:10:21.760461  499754 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:10:21.782452  499754 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:10:21.782473  499754 cache_images.go:84] Images are preloaded, skipping loading
	I0325 02:10:21.782514  499754 ssh_runner.go:195] Run: sudo crictl info
	I0325 02:10:21.807097  499754 cni.go:93] Creating CNI manager for ""
	I0325 02:10:21.807132  499754 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:10:21.807148  499754 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 02:10:21.807166  499754 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220325020956-262786 NodeName:default-k8s-different-port-20220325020956-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 02:10:21.807323  499754 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220325020956-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
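
The generated kubeadm.yaml above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`, all consumed through a single kubeadm --config flag. A small stdlib-only sketch that splits such a file and lists each document's kind; a real consumer would use a YAML decoder:

package main

import (
	"fmt"
	"strings"
)

// kinds splits a multi-document config like the one in the log and
// reports each document's "kind:" value.
func kinds(config string) []string {
	var out []string
	for _, doc := range strings.Split(config, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				out = append(out, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	return out
}

func main() {
	cfg := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
	fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration]
}
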
	
	I0325 02:10:21.807405  499754 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220325020956-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
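
In the kubelet drop-in above, the empty `ExecStart=` line is deliberate systemd semantics: it clears the ExecStart inherited from the packaged unit so the following line fully replaces it rather than being rejected as a duplicate. A sketch of rendering such a drop-in from a flag map, where the binary path and flags are illustrative:

package main

import (
	"fmt"
	"sort"
	"strings"
)

// dropIn renders a [Service] override like the one in the log. The
// first, empty ExecStart= resets the packaged unit's command line.
func dropIn(bin string, flags map[string]string) string {
	keys := make([]string, 0, len(flags))
	for k := range flags {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic flag order
	args := make([]string, 0, len(keys))
	for _, k := range keys {
		args = append(args, fmt.Sprintf("--%s=%s", k, flags[k]))
	}
	return "[Service]\nExecStart=\nExecStart=" + bin + " " + strings.Join(args, " ") + "\n"
}

func main() {
	fmt.Print(dropIn("/var/lib/minikube/binaries/v1.23.3/kubelet",
		map[string]string{"node-ip": "192.168.49.2", "hostname-override": "minikube"}))
}
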
	I0325 02:10:21.807460  499754 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
	I0325 02:10:21.814738  499754 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 02:10:21.814810  499754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 02:10:21.821717  499754 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (593 bytes)
	I0325 02:10:21.834543  499754 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0325 02:10:21.847823  499754 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I0325 02:10:21.860732  499754 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0325 02:10:21.863660  499754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:10:21.873114  499754 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786 for IP: 192.168.49.2
	I0325 02:10:21.873219  499754 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 02:10:21.873254  499754 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 02:10:21.873300  499754 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.key
	I0325 02:10:21.873315  499754 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.crt with IP's: []
	I0325 02:10:19.657548  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:21.657930  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:22.964529  496534 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I0325 02:10:22.086042  499754 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.crt ...
	I0325 02:10:22.086079  499754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.crt: {Name:mk3fa505ddeadfb71ae12132b08f65cbaffe9fee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:10:22.086286  499754 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.key ...
	I0325 02:10:22.086301  499754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.key: {Name:mk8cd1621ad5fabcdccd60fe08084c6c3c77af54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:10:22.086405  499754 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key.dd3b5fb2
	I0325 02:10:22.086422  499754 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0325 02:10:22.211354  499754 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt.dd3b5fb2 ...
	I0325 02:10:22.211391  499754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt.dd3b5fb2: {Name:mk73aa504bc3cba2d5ab4ac154baa063fd0d5f13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:10:22.211604  499754 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key.dd3b5fb2 ...
	I0325 02:10:22.211621  499754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key.dd3b5fb2: {Name:mk9ee3873ae9b5591bffcf4d5c0014be09430042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:10:22.211717  499754 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt
	I0325 02:10:22.211781  499754 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key
	I0325 02:10:22.211835  499754 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key
	I0325 02:10:22.211852  499754 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.crt with IP's: []
	I0325 02:10:22.265235  499754 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.crt ...
	I0325 02:10:22.265270  499754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.crt: {Name:mk264bc0fed6e01799cd161be0b8068147ee22be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:10:22.265482  499754 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key ...
	I0325 02:10:22.265498  499754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key: {Name:mk2f43ee9cbf3f46b001e7c43d5526eb55c53254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
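
The crypto.go steps above issue CA-signed certificates whose SANs carry the IPs listed in the log (for the apiserver: 192.168.49.2, 10.96.0.1, 127.0.0.1, 10.0.0.1). A compact crypto/x509 sketch of the same kind of operation, with error handling elided for brevity and all names, SANs and lifetimes illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA (errors elided in this sketch).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "exampleCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with IP SANs, signed by the CA above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "apiserver"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
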
	I0325 02:10:22.265681  499754 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 02:10:22.265732  499754 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 02:10:22.265753  499754 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 02:10:22.265782  499754 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 02:10:22.265810  499754 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 02:10:22.265836  499754 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 02:10:22.265877  499754 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:10:22.267336  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 02:10:22.286155  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0325 02:10:22.303500  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 02:10:22.321023  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0325 02:10:22.338447  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 02:10:22.355546  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 02:10:22.373068  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 02:10:22.391964  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 02:10:22.410726  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 02:10:22.429538  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 02:10:22.449045  499754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 02:10:22.468241  499754 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 02:10:22.481065  499754 ssh_runner.go:195] Run: openssl version
	I0325 02:10:22.486711  499754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 02:10:22.494921  499754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:10:22.498209  499754 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:10:22.498300  499754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:10:22.503928  499754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0325 02:10:22.511759  499754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 02:10:22.519485  499754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 02:10:22.522862  499754 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 02:10:22.522914  499754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 02:10:22.527845  499754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
	I0325 02:10:22.535681  499754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 02:10:22.543890  499754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 02:10:22.547114  499754 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 02:10:22.547170  499754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 02:10:22.552138  499754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
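
The `openssl x509 -hash -noout` calls above compute each certificate's OpenSSL subject-name hash, and the `ln -fs ... <hash>.0` commands create the symlinks OpenSSL expects in a hashed certificate directory such as /etc/ssl/certs, which is how minikubeCA becomes trusted inside the node. A sketch of the same hash-and-link step, with illustrative paths:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hashLink computes the subject hash with the openssl CLI, as the log
// does, then links the CA into the directory as <hash>.0.
func hashLink(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := certsDir + "/" + hash + ".0"
	os.Remove(link) // replace any stale link; error ignored on purpose
	return os.Symlink(pemPath, link)
}

func main() {
	if err := hashLink("minikubeCA.pem", "."); err != nil {
		fmt.Println(err)
	}
}
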
	I0325 02:10:22.559456  499754 kubeadm.go:391] StartCluster: {Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:10:22.559551  499754 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 02:10:22.559605  499754 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:10:22.583656  499754 cri.go:87] found id: ""
	I0325 02:10:22.583731  499754 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 02:10:22.590936  499754 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:10:22.597739  499754 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 02:10:22.597789  499754 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:10:22.604662  499754 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0325 02:10:22.604711  499754 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0325 02:10:24.157335  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:26.157649  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:28.656658  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:30.657787  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:32.727631  496534 retry.go:31] will retry after 18.937774914s: kubelet not initialised
	I0325 02:10:33.157998  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:35.656712  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:37.658182  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:38.485167  499754 out.go:203]   - Generating certificates and keys ...
	I0325 02:10:38.488498  499754 out.go:203]   - Booting up control plane ...
	I0325 02:10:38.491047  499754 out.go:203]   - Configuring RBAC rules ...
	I0325 02:10:38.492754  499754 cni.go:93] Creating CNI manager for ""
	I0325 02:10:38.492778  499754 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:10:38.494300  499754 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:10:38.494356  499754 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:10:38.498637  499754 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 02:10:38.498656  499754 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:10:38.511807  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:10:39.348464  499754 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 02:10:39.348575  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=default-k8s-different-port-20220325020956-262786 minikube.k8s.io/updated_at=2022_03_25T02_10_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:39.348576  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:39.416878  499754 ops.go:34] apiserver oom_adj: -16
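
The oom_adj check above confirms the apiserver runs with a strongly negative OOM adjustment (-16), making it one of the last processes the kernel's OOM killer would pick. The log reads it through a shell substitution; an equivalent without a shell, assuming pgrep is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the newest kube-apiserver PID, like $(pgrep kube-apiserver).
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("pgrep:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	// Read the per-process OOM adjustment from procfs.
	val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", val)
}
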
	I0325 02:10:39.417009  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:39.973782  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:40.473400  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:40.974027  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:41.473940  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:41.973555  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:40.157859  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:42.657670  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:42.473628  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:42.973153  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:43.473122  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:43.974112  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:44.473987  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:44.973876  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:45.474137  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:45.973209  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:46.473313  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:46.974049  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:44.657969  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:47.157478  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:47.473211  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:47.973274  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:48.474093  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:48.974045  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:49.473780  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:49.973598  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:50.473469  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:50.973342  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:51.473230  499754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:10:51.528991  499754 kubeadm.go:1020] duration metric: took 12.180509126s to wait for elevateKubeSystemPrivileges.
	I0325 02:10:51.529033  499754 kubeadm.go:393] StartCluster complete in 28.969578042s
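
The burst of `kubectl get sa default` calls above is a fixed-interval poll, roughly every 500ms by the timestamps, until the default ServiceAccount exists; that is the readiness signal elevateKubeSystemPrivileges waits on after creating the minikube-rbac clusterrolebinding. A sketch of that loop, with the kubectl invocation simplified:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries until `kubectl get sa default` succeeds or
// the timeout elapses, mirroring the half-second cadence in the log.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
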
	I0325 02:10:51.529051  499754 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:10:51.529136  499754 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:10:51.530502  499754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:10:52.047005  499754 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220325020956-262786" rescaled to 1
	I0325 02:10:52.047116  499754 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 02:10:52.047120  499754 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:10:52.049299  499754 out.go:176] * Verifying Kubernetes components...
	I0325 02:10:52.049361  499754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:10:52.047200  499754 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0325 02:10:52.047417  499754 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:10:52.049465  499754 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:10:52.049483  499754 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220325020956-262786"
	I0325 02:10:52.049449  499754 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:10:52.051250  499754 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:10:52.051270  499754 addons.go:165] addon storage-provisioner should already be in state true
	I0325 02:10:52.049922  499754 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:10:52.051307  499754 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:10:52.051846  499754 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:10:49.157584  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:51.157627  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:52.095525  499754 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:10:52.095669  499754 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:10:52.095682  499754 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 02:10:52.095738  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:52.099391  499754 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:10:52.099423  499754 addons.go:165] addon default-storageclass should already be in state true
	I0325 02:10:52.099457  499754 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:10:52.100002  499754 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:10:52.117112  499754 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
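
This pipeline edits the coredns ConfigMap in place: it inserts a hosts{} block with fallthrough immediately before the `forward . /etc/resolv.conf` line, which is what makes host.minikube.internal resolvable from inside the cluster (confirmed by the "host record injected into CoreDNS" line below). The same string surgery sketched in Go instead of sed, with the Corefile trimmed for illustration:

package main

import (
	"fmt"
	"strings"
)

// injectHost inserts a hosts{} block before the Corefile's "forward ."
// line, as the sed pipeline in the log does.
func injectHost(corefile, ip, name string) string {
	block := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, name)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward .") {
			b.WriteString(block)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	core := ".:53 {\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHost(core, "192.168.49.1", "host.minikube.internal"))
}
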
	I0325 02:10:52.118624  499754 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220325020956-262786" to be "Ready" ...
	I0325 02:10:52.145585  499754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49574 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:10:52.145943  499754 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 02:10:52.145966  499754 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 02:10:52.146021  499754 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:10:52.181889  499754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49574 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:10:52.397643  499754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:10:52.397773  499754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 02:10:52.500022  499754 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0325 02:10:51.672096  496534 retry.go:31] will retry after 15.44552029s: kubelet not initialised
	I0325 02:10:52.740166  499754 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0325 02:10:52.740200  499754 addons.go:417] enableAddons completed in 693.011953ms
	I0325 02:10:54.125869  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:10:56.126258  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:10:53.656540  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:55.659452  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:10:58.626709  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:01.126502  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:10:58.157112  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:00.157774  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:02.657884  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:03.626891  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:06.126044  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:05.159425  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:07.656652  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:07.122083  496534 kubeadm.go:752] kubelet initialised
	I0325 02:11:07.122105  496534 kubeadm.go:753] duration metric: took 58.436406731s waiting for restarted kubelet to initialise ...
	I0325 02:11:07.122113  496534 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:11:07.126263  496534 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace to be "Ready" ...
	I0325 02:11:09.132444  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
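
pod_ready.go is dumping the pod's status object here: the pod is Pending with PodScheduled=False because, per its own message, the single node still carries a taint the pod does not tolerate, so the scheduler has nowhere to place it. For reference, a client-go sketch of reading the same Ready condition directly; this is not minikube's own helper, and the kubeconfig path, namespace and pod name are placeholders:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5644d7b6d9-trm4j", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Report the pod's Ready condition, the field the polling lines key on.
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Printf("Ready=%s (%s)\n", c.Status, c.Reason)
		}
	}
}
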
	I0325 02:11:08.126602  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:10.626152  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:09.657625  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:12.157492  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:11.632007  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:13.632324  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:13.125939  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:15.126097  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:14.657085  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:16.657178  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:16.132388  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:18.632014  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:17.126879  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:19.626460  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:19.157578  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:21.656580  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:21.132025  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:23.631983  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:22.126746  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:24.625938  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:26.626821  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:23.657369  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:25.658028  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:25.632464  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:28.131852  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:29.125973  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:31.126451  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:28.156635  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:30.157002  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:32.157776  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:30.632396  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:33.131560  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:33.625824  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:35.626488  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:34.657084  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:36.657639  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:35.132145  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:37.632739  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:38.126413  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:40.126766  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:39.157635  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:41.157860  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:40.132081  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:42.631618  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:44.631713  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:42.626417  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:45.126171  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:43.658011  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:46.157268  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:46.631932  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:48.632103  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:47.626560  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:50.126171  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:48.158393  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:50.656744  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:52.658124  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:51.132391  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:53.631360  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:52.126554  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:54.126864  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:56.626282  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
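The interleaved node_ready.go:58 lines are the same wait loop at node granularity, from a second cluster under test: the node object exists but its Ready condition stays False, so the poll keeps printing. A sketch of inspecting the node condition and its taints, under the same illustrative-naming caveat:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // printNodeReadiness dumps the node's Ready condition and taints; the
    // helper name is made up here and is not minikube's node_ready.go code.
    func printNodeReadiness(cs kubernetes.Interface, name string) error {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("Ready=%s reason=%s\n", c.Status, c.Reason)
            }
        }
        // A NotReady node typically still carries a not-ready taint, which is
        // one way a single-node cluster ends up unschedulable for pods.
        for _, t := range node.Spec.Taints {
            fmt.Printf("taint %s=%s:%s\n", t.Key, t.Value, t.Effect)
        }
        return nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        if err := printNodeReadiness(kubernetes.NewForConfigOrDie(cfg), "default-k8s-different-port-20220325020956-262786"); err != nil {
            panic(err)
        }
    }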
	I0325 02:11:55.157604  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:57.157637  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:11:55.631562  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:57.632380  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:11:58.626784  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:01.126195  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:11:59.657507  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:02.157532  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:00.131822  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:02.132591  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:04.632261  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:03.126534  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:05.126676  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:04.157704  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:06.656109  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:06.632949  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:08.633116  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:07.126769  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:09.626009  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:11.626499  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:08.657532  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:10.659390  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:11.131646  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:13.131704  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:14.126333  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:16.127511  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:13.156938  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:15.157797  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:17.657538  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:15.132130  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:17.132393  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:19.631849  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:18.626217  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:21.125909  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:20.157723  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:22.657697  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:21.632408  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:24.132780  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:23.625969  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:25.626369  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:25.157678  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:27.657482  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:26.632411  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:28.632603  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:27.626709  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:30.126323  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:30.156814  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:32.156858  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:31.132181  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:33.132360  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:32.126500  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:34.626576  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:34.157056  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:36.157312  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:35.132525  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:37.632302  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:37.126550  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:39.626394  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:38.657452  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:41.156868  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:40.131788  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:42.131858  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:44.132177  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:42.126518  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:44.626536  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:43.657343  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:46.157589  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:46.132370  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:48.632249  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:47.125702  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:49.126898  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:51.625941  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:48.657411  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:51.157120  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:51.131576  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:53.132752  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:53.626161  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:56.126674  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:12:53.157321  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:55.657523  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:57.657806  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:12:55.632286  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:58.131404  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:12:58.626173  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:00.626748  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:00.157330  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:02.157517  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:00.132015  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:02.631784  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:04.632346  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:03.126612  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:05.127004  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:04.656760  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:06.657318  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:07.131834  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:09.132317  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:07.625980  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:10.126585  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:09.156810  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:11.157098  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:11.632370  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:14.131554  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:12.626225  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:14.626565  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:13.157653  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:15.656930  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:17.657102  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:16.132282  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:18.132434  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:17.126776  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:19.626060  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:21.626308  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:20.157757  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:22.657682  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:20.632257  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:23.131700  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:23.626421  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:25.626666  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:25.156852  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:27.657381  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:25.131884  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:27.132649  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:29.632386  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:28.126547  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:30.126815  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:30.157012  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:32.658302  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:32.131891  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:34.132513  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:32.127249  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:34.626237  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:35.157396  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:37.157714  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:36.632138  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:38.632532  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:37.126783  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:39.626472  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:41.626594  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:39.157820  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:41.656786  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:41.132654  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:43.631680  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:44.126195  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:46.126248  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:43.657758  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:46.157772  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:46.132612  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:48.632420  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:48.126623  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:50.626430  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:48.157854  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:50.658725  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:51.131574  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:53.131762  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:53.126872  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:55.626160  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:13:53.157817  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:55.657968  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:55.131847  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:57.631529  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:59.632010  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:13:58.157283  493081 pod_ready.go:102] pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace has status "Ready":"False"
	I0325 02:13:58.651567  493081 pod_ready.go:81] duration metric: took 4m0.174893624s waiting for pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace to be "Ready" ...
	E0325 02:13:58.651593  493081 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-mcnqx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0325 02:13:58.651619  493081 pod_ready.go:38] duration metric: took 4m12.217935075s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:13:58.651655  493081 kubeadm.go:605] restartCluster took 4m28.884588827s
	W0325 02:13:58.651809  493081 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
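Here the third cluster's wait finally gives up: pod_ready.go:81 records that the metrics-server wait took 4m0.17s, pod_ready.go:66 marks the timeout as non-retryable, and minikube abandons restartCluster (4m28s in total) in favor of a full kubeadm reset, shown on the next lines. A generic sketch of that bounded wait using apimachinery's wait helpers (the 2s interval is an assumption; only the 4m cap appears in the log):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Poll until the condition holds or 4 minutes elapse, matching the
        // "timed out waiting 4m0s" failure above; the condition is a stub.
        err := wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
            ready := false // stand-in for the real metrics-server Ready check
            return ready, nil
        })
        if err != nil {
            fmt.Println("giving up, will reset the cluster:", err)
        }
    }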
	I0325 02:13:58.651858  493081 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0325 02:14:01.244870  493081 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.592993286s)
	I0325 02:14:01.244933  493081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:14:01.255147  493081 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:14:01.263077  493081 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 02:14:01.263130  493081 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:14:01.270209  493081 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
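The config check fails with exit status 2 because the kubeadm reset above already deleted the control-plane kubeconfigs, so there is no stale configuration left to clean up and minikube proceeds straight to a fresh kubeadm init on the next line. A small sketch of that presence check (the helper name kubeconfigsPresent is made up for illustration):

    package main

    import (
        "fmt"
        "os"
    )

    // kubeconfigsPresent mirrors the logged "ls -la" probe: if any of the
    // control-plane kubeconfigs is missing, stale-config cleanup can be
    // skipped because kubeadm reset already wiped the node.
    func kubeconfigsPresent() bool {
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if _, err := os.Stat(f); err != nil {
                fmt.Printf("missing %s: %v\n", f, err)
                return false
            }
        }
        return true
    }

    func main() { fmt.Println(kubeconfigsPresent()) }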
	I0325 02:14:01.270265  493081 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0325 02:13:58.126202  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:00.126773  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:01.545822  493081 out.go:203]   - Generating certificates and keys ...
	I0325 02:14:02.632148  493081 out.go:203]   - Booting up control plane ...
	I0325 02:14:02.131694  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:04.132479  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:02.626061  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:05.126573  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:06.632419  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:09.132050  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:07.626731  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:09.628112  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:11.632228  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:14.131720  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:14.693663  493081 out.go:203]   - Configuring RBAC rules ...
	I0325 02:14:15.108043  493081 cni.go:93] Creating CNI manager for ""
	I0325 02:14:15.108073  493081 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
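cni.go:160 records the CNI choice: with the docker driver and a containerd runtime, Docker's built-in pod networking is not in play, so minikube recommends kindnet. A deliberately reduced sketch of that decision (an illustration of the logged rule only, not minikube's real cni.go logic; the bridge fallback is an assumption):

    package main

    import "fmt"

    // recommendCNI compresses the rule behind the cni.go:160 line above: the
    // docker driver with a non-docker runtime needs an explicit CNI, and
    // minikube's pick for that combination is kindnet.
    func recommendCNI(driver, runtime string) string {
        if driver == "docker" && runtime == "containerd" {
            return "kindnet"
        }
        return "bridge" // assumed fallback, not confirmed by this log
    }

    func main() { fmt.Println(recommendCNI("docker", "containerd")) }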
	I0325 02:14:12.126217  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:14.126733  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:16.626727  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:15.110056  493081 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:14:15.110142  493081 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:14:15.114418  493081 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 02:14:15.114451  493081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:14:15.128351  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:14:15.832822  493081 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 02:14:15.832940  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:15.832966  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=embed-certs-20220325020743-262786 minikube.k8s.io/updated_at=2022_03_25T02_14_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:15.840323  493081 ops.go:34] apiserver oom_adj: -16
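ops.go:34 reports the API server's oom_adj as -16, read via the cat /proc/$(pgrep kube-apiserver)/oom_adj command a few lines earlier; on the legacy -17..+15 scale, a strongly negative value tells the kernel's OOM killer to avoid the process. A sketch of the same read in Go (run against its own PID purely to demonstrate):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // oomAdj reads the legacy /proc/<pid>/oom_adj score; -17 disables OOM
    // kills entirely and -16, as logged for kube-apiserver, makes them very
    // unlikely.
    func oomAdj(pid int) (string, error) {
        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        v, err := oomAdj(os.Getpid()) // own PID, just to demonstrate the read
        fmt.Println(v, err)
    }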
	I0325 02:14:15.910871  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:16.497971  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:16.997464  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:17.497696  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:16.131756  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:18.132594  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:18.626880  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:21.126074  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:17.998347  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:18.498338  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:18.997592  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:19.497831  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:19.998444  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:20.497476  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:20.997573  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:21.498032  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:21.997667  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:22.497508  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:20.633062  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:23.131946  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:23.126324  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:25.126819  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:22.997443  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:23.497582  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:23.997852  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:24.497434  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:24.997989  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:25.497608  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:25.997414  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:26.498266  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:26.998176  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:27.498400  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:27.998029  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:28.498130  493081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:14:28.554819  493081 kubeadm.go:1020] duration metric: took 12.72194646s to wait for elevateKubeSystemPrivileges.
	I0325 02:14:28.554858  493081 kubeadm.go:393] StartCluster complete in 4m58.837271595s
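
The long run of identical "kubectl get sa default" calls above is a 500ms poll loop: elevateKubeSystemPrivileges is not considered complete until the default ServiceAccount exists, so minikube retries until it appears (12.7s here). The same wait, written directly as shell, is roughly:

    # Poll for the default ServiceAccount the way the log does (assumes
    # kubectl already points at the freshly started cluster).
    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done
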
	I0325 02:14:28.554902  493081 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:14:28.555122  493081 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:14:28.556787  493081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:14:29.075959  493081 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220325020743-262786" rescaled to 1
	I0325 02:14:29.076042  493081 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:14:29.076072  493081 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 02:14:29.078626  493081 out.go:176] * Verifying Kubernetes components...
	I0325 02:14:29.076078  493081 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0325 02:14:29.078705  493081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:14:29.078761  493081 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220325020743-262786"
	I0325 02:14:29.078791  493081 addons.go:65] Setting dashboard=true in profile "embed-certs-20220325020743-262786"
	I0325 02:14:29.078796  493081 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220325020743-262786"
	W0325 02:14:29.078806  493081 addons.go:165] addon storage-provisioner should already be in state true
	I0325 02:14:29.078808  493081 addons.go:153] Setting addon dashboard=true in "embed-certs-20220325020743-262786"
	W0325 02:14:29.078817  493081 addons.go:165] addon dashboard should already be in state true
	I0325 02:14:29.078823  493081 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220325020743-262786"
	I0325 02:14:29.076308  493081 config.go:176] Loaded profile config "embed-certs-20220325020743-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:14:29.078845  493081 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220325020743-262786"
	I0325 02:14:29.078852  493081 host.go:66] Checking if "embed-certs-20220325020743-262786" exists ...
	I0325 02:14:29.078849  493081 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220325020743-262786"
	I0325 02:14:29.078870  493081 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220325020743-262786"
	W0325 02:14:29.078878  493081 addons.go:165] addon metrics-server should already be in state true
	I0325 02:14:29.078927  493081 host.go:66] Checking if "embed-certs-20220325020743-262786" exists ...
	I0325 02:14:29.078842  493081 host.go:66] Checking if "embed-certs-20220325020743-262786" exists ...
	I0325 02:14:29.079253  493081 cli_runner.go:133] Run: docker container inspect embed-certs-20220325020743-262786 --format={{.State.Status}}
	I0325 02:14:29.079474  493081 cli_runner.go:133] Run: docker container inspect embed-certs-20220325020743-262786 --format={{.State.Status}}
	I0325 02:14:29.079565  493081 cli_runner.go:133] Run: docker container inspect embed-certs-20220325020743-262786 --format={{.State.Status}}
	I0325 02:14:29.079486  493081 cli_runner.go:133] Run: docker container inspect embed-certs-20220325020743-262786 --format={{.State.Status}}
	I0325 02:14:29.132734  493081 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:14:29.132864  493081 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:14:29.132875  493081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 02:14:29.132934  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:14:29.134691  493081 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I0325 02:14:29.138085  493081 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0325 02:14:25.632672  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:28.132568  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:29.139878  493081 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0325 02:14:29.137917  493081 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220325020743-262786"
	W0325 02:14:29.139932  493081 addons.go:165] addon default-storageclass should already be in state true
	I0325 02:14:29.139960  493081 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0325 02:14:29.139973  493081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0325 02:14:29.140051  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:14:29.139976  493081 host.go:66] Checking if "embed-certs-20220325020743-262786" exists ...
	I0325 02:14:29.138180  493081 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0325 02:14:29.140162  493081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0325 02:14:29.140217  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:14:29.140582  493081 cli_runner.go:133] Run: docker container inspect embed-certs-20220325020743-262786 --format={{.State.Status}}
	I0325 02:14:29.157314  493081 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0325 02:14:29.159073  493081 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220325020743-262786" to be "Ready" ...
	I0325 02:14:29.180609  493081 node_ready.go:49] node "embed-certs-20220325020743-262786" has status "Ready":"True"
	I0325 02:14:29.180743  493081 node_ready.go:38] duration metric: took 21.635681ms waiting for node "embed-certs-20220325020743-262786" to be "Ready" ...
	I0325 02:14:29.180769  493081 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:14:29.188693  493081 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-2r226" in "kube-system" namespace to be "Ready" ...
	I0325 02:14:29.202274  493081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49564 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:14:29.207084  493081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49564 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:14:29.208937  493081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49564 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:14:29.211469  493081 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 02:14:29.211496  493081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 02:14:29.211548  493081 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220325020743-262786
	I0325 02:14:29.251273  493081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49564 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220325020743-262786/id_rsa Username:docker}
	I0325 02:14:29.399874  493081 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0325 02:14:29.399902  493081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0325 02:14:29.400113  493081 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0325 02:14:29.400139  493081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0325 02:14:29.400117  493081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:14:29.414909  493081 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0325 02:14:29.414942  493081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0325 02:14:29.416891  493081 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0325 02:14:29.416914  493081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0325 02:14:29.489351  493081 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:14:29.489377  493081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0325 02:14:29.491633  493081 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0325 02:14:29.491659  493081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0325 02:14:29.508615  493081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:14:29.509740  493081 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0325 02:14:29.509762  493081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0325 02:14:29.596638  493081 start.go:777] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0325 02:14:29.599855  493081 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0325 02:14:29.599888  493081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0325 02:14:29.602755  493081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 02:14:29.616300  493081 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0325 02:14:29.616327  493081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0325 02:14:29.701693  493081 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0325 02:14:29.701720  493081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0325 02:14:29.723180  493081 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0325 02:14:29.723215  493081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0325 02:14:29.803797  493081 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:14:29.803841  493081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0325 02:14:29.822473  493081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:14:30.312473  493081 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220325020743-262786"
	I0325 02:14:27.626496  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:30.126312  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:30.727294  493081 out.go:176] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0325 02:14:30.727329  493081 addons.go:417] enableAddons completed in 1.651261691s
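
Each addon above follows the same mechanical pattern: copy the manifests into /etc/kubernetes/addons on the node, then apply them in one kubectl batch per addon. On an existing profile the equivalent host-side commands are the stock addon subcommands; a sketch, assuming the embed-certs profile is still present:

    minikube -p embed-certs-20220325020743-262786 addons enable metrics-server
    minikube -p embed-certs-20220325020743-262786 addons enable dashboard
    minikube -p embed-certs-20220325020743-262786 addons list   # verify resulting state
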
	I0325 02:14:31.202362  493081 pod_ready.go:102] pod "coredns-64897985d-2r226" in "kube-system" namespace has status "Ready":"False"
	I0325 02:14:30.631793  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:32.632303  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:34.632640  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:33.203382  493081 pod_ready.go:102] pod "coredns-64897985d-2r226" in "kube-system" namespace has status "Ready":"False"
	I0325 02:14:35.701657  493081 pod_ready.go:92] pod "coredns-64897985d-2r226" in "kube-system" namespace has status "Ready":"True"
	I0325 02:14:35.701688  493081 pod_ready.go:81] duration metric: took 6.512963191s waiting for pod "coredns-64897985d-2r226" in "kube-system" namespace to be "Ready" ...
	I0325 02:14:35.701700  493081 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220325020743-262786" in "kube-system" namespace to be "Ready" ...
	I0325 02:14:35.706363  493081 pod_ready.go:92] pod "etcd-embed-certs-20220325020743-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 02:14:35.706386  493081 pod_ready.go:81] duration metric: took 4.678839ms waiting for pod "etcd-embed-certs-20220325020743-262786" in "kube-system" namespace to be "Ready" ...
	I0325 02:14:35.706404  493081 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220325020743-262786" in "kube-system" namespace to be "Ready" ...
	I0325 02:14:35.711115  493081 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220325020743-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 02:14:35.711135  493081 pod_ready.go:81] duration metric: took 4.72236ms waiting for pod "kube-apiserver-embed-certs-20220325020743-262786" in "kube-system" namespace to be "Ready" ...
	I0325 02:14:35.711172  493081 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220325020743-262786" in "kube-system" namespace to be "Ready" ...
	I0325 02:14:35.715591  493081 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220325020743-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 02:14:35.715613  493081 pod_ready.go:81] duration metric: took 4.424394ms waiting for pod "kube-controller-manager-embed-certs-20220325020743-262786" in "kube-system" namespace to be "Ready" ...
	I0325 02:14:35.715625  493081 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g6zfx" in "kube-system" namespace to be "Ready" ...
	I0325 02:14:35.720118  493081 pod_ready.go:92] pod "kube-proxy-g6zfx" in "kube-system" namespace has status "Ready":"True"
	I0325 02:14:35.720142  493081 pod_ready.go:81] duration metric: took 4.508967ms waiting for pod "kube-proxy-g6zfx" in "kube-system" namespace to be "Ready" ...
	I0325 02:14:35.720154  493081 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220325020743-262786" in "kube-system" namespace to be "Ready" ...
	I0325 02:14:36.099302  493081 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220325020743-262786" in "kube-system" namespace has status "Ready":"True"
	I0325 02:14:36.099329  493081 pod_ready.go:81] duration metric: took 379.165429ms waiting for pod "kube-scheduler-embed-certs-20220325020743-262786" in "kube-system" namespace to be "Ready" ...
	I0325 02:14:36.099341  493081 pod_ready.go:38] duration metric: took 6.918554086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:14:36.099365  493081 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:14:36.099419  493081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:14:36.110569  493081 api_server.go:71] duration metric: took 7.034487499s to wait for apiserver process to appear ...
	I0325 02:14:36.110606  493081 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:14:36.110620  493081 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:14:36.118302  493081 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0325 02:14:36.119573  493081 api_server.go:140] control plane version: v1.23.3
	I0325 02:14:36.119597  493081 api_server.go:130] duration metric: took 8.98482ms to wait for apiserver health ...
	I0325 02:14:36.119606  493081 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:14:36.323913  493081 system_pods.go:59] 9 kube-system pods found
	I0325 02:14:36.323952  493081 system_pods.go:61] "coredns-64897985d-2r226" [fe7a65b3-7e57-427b-9702-593a4b724f03] Running
	I0325 02:14:36.323961  493081 system_pods.go:61] "etcd-embed-certs-20220325020743-262786" [040b9c2b-bd00-4339-a262-f7d5861ceb1c] Running
	I0325 02:14:36.323967  493081 system_pods.go:61] "kindnet-4w8fx" [e38eb43f-2d94-4d55-81be-6f48dcedf5d4] Running
	I0325 02:14:36.323973  493081 system_pods.go:61] "kube-apiserver-embed-certs-20220325020743-262786" [3de7f65c-e46e-470f-807a-1efbf5733239] Running
	I0325 02:14:36.323979  493081 system_pods.go:61] "kube-controller-manager-embed-certs-20220325020743-262786" [1504e310-a6af-4113-9517-c33008e4afa9] Running
	I0325 02:14:36.323984  493081 system_pods.go:61] "kube-proxy-g6zfx" [3d594f4b-815b-457c-9c7e-3756dd107d8f] Running
	I0325 02:14:36.323989  493081 system_pods.go:61] "kube-scheduler-embed-certs-20220325020743-262786" [d825a183-ad92-465d-87b2-bc7aad73958c] Running
	I0325 02:14:36.323999  493081 system_pods.go:61] "metrics-server-b955d9d8-wphbt" [bdef9368-48ae-4991-a201-534a2b09f33a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0325 02:14:36.324012  493081 system_pods.go:61] "storage-provisioner" [5bcd44ee-6af9-4200-88bd-2b25f22804f6] Running
	I0325 02:14:36.324019  493081 system_pods.go:74] duration metric: took 204.406842ms to wait for pod list to return data ...
	I0325 02:14:36.324029  493081 default_sa.go:34] waiting for default service account to be created ...
	I0325 02:14:36.498624  493081 default_sa.go:45] found service account: "default"
	I0325 02:14:36.498652  493081 default_sa.go:55] duration metric: took 174.617951ms for default service account to be created ...
	I0325 02:14:36.498663  493081 system_pods.go:116] waiting for k8s-apps to be running ...
	I0325 02:14:36.715298  493081 system_pods.go:86] 9 kube-system pods found
	I0325 02:14:36.715341  493081 system_pods.go:89] "coredns-64897985d-2r226" [fe7a65b3-7e57-427b-9702-593a4b724f03] Running
	I0325 02:14:36.715351  493081 system_pods.go:89] "etcd-embed-certs-20220325020743-262786" [040b9c2b-bd00-4339-a262-f7d5861ceb1c] Running
	I0325 02:14:36.715358  493081 system_pods.go:89] "kindnet-4w8fx" [e38eb43f-2d94-4d55-81be-6f48dcedf5d4] Running
	I0325 02:14:36.715364  493081 system_pods.go:89] "kube-apiserver-embed-certs-20220325020743-262786" [3de7f65c-e46e-470f-807a-1efbf5733239] Running
	I0325 02:14:36.715376  493081 system_pods.go:89] "kube-controller-manager-embed-certs-20220325020743-262786" [1504e310-a6af-4113-9517-c33008e4afa9] Running
	I0325 02:14:36.715382  493081 system_pods.go:89] "kube-proxy-g6zfx" [3d594f4b-815b-457c-9c7e-3756dd107d8f] Running
	I0325 02:14:36.715392  493081 system_pods.go:89] "kube-scheduler-embed-certs-20220325020743-262786" [d825a183-ad92-465d-87b2-bc7aad73958c] Running
	I0325 02:14:36.715404  493081 system_pods.go:89] "metrics-server-b955d9d8-wphbt" [bdef9368-48ae-4991-a201-534a2b09f33a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0325 02:14:36.715421  493081 system_pods.go:89] "storage-provisioner" [5bcd44ee-6af9-4200-88bd-2b25f22804f6] Running
	I0325 02:14:36.715431  493081 system_pods.go:126] duration metric: took 216.761449ms to wait for k8s-apps to be running ...
	I0325 02:14:36.715440  493081 system_svc.go:44] waiting for kubelet service to be running ....
	I0325 02:14:36.715489  493081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:14:36.726872  493081 system_svc.go:56] duration metric: took 11.419419ms WaitForService to wait for kubelet.
	I0325 02:14:36.726906  493081 kubeadm.go:548] duration metric: took 7.650829652s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0325 02:14:36.726940  493081 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:14:36.899263  493081 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:14:36.899292  493081 node_conditions.go:123] node cpu capacity is 8
	I0325 02:14:36.899307  493081 node_conditions.go:105] duration metric: took 172.360607ms to run NodePressure ...
	I0325 02:14:36.899320  493081 start.go:213] waiting for startup goroutines ...
	I0325 02:14:36.936606  493081 start.go:499] kubectl: 1.23.5, cluster: 1.23.3 (minor skew: 0)
	I0325 02:14:36.939415  493081 out.go:176] * Done! kubectl is now configured to use "embed-certs-20220325020743-262786" cluster and "default" namespace by default
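
That closes the embed-certs run's readiness gate: node Ready, every system-critical pod polled to Ready, apiserver /healthz returning 200, NodePressure verified. The same gate can be approximated by hand; a sketch, assuming kubectl is pointed at the same cluster:

    # Node and system-critical pods, with the same 6m budget the test uses:
    kubectl wait --for=condition=Ready node --all --timeout=360s
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=360s
    # The health probe the log runs against https://192.168.58.2:8443/healthz:
    kubectl get --raw /healthz
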
	I0325 02:14:32.127310  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:34.626485  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:36.626528  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:36.632676  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:39.131649  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:39.126798  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:41.127164  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:41.132420  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:43.632300  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:43.626585  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:46.126803  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:46.132447  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:48.132544  496534 pod_ready.go:102] pod "coredns-5644d7b6d9-trm4j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 01:57:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:14:48.127039  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:50.127543  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:52.128852  499754 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:14:52.128877  499754 node_ready.go:38] duration metric: took 4m0.010219075s waiting for node "default-k8s-different-port-20220325020956-262786" to be "Ready" ...
	I0325 02:14:52.131207  499754 out.go:176] 
	W0325 02:14:52.131355  499754 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:14:52.131371  499754 out.go:241] * 
	W0325 02:14:52.132231  499754 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
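
Unlike the embed-certs run, this default-k8s-different-port cluster never gets a Ready node, so the 6m GUEST_START budget expires. The sections below contain the causal chain: the kubelet reports "cni plugin not initialized", the node therefore keeps its not-ready taint, and workload pods stay unscheduled. On a live cluster the same two signals can be surfaced with, for example:

    kubectl get nodes -o wide
    kubectl describe node | grep -E -A1 'Taints|KubeletNotReady'
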
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	22b6127ab7f71       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   f2bb5089f8868
	d208a110372dd       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   f2bb5089f8868
	dd3e42aaf3dd8       9b7cc99821098       4 minutes ago        Running             kube-proxy                0                   b8a442f1cca90
	21482958b68c2       b07520cd7ab76       4 minutes ago        Running             kube-controller-manager   0                   f48ebb07b3e52
	bc6cf9877becc       25f8c7f3da61c       4 minutes ago        Running             etcd                      0                   083318a0382f5
	6a469f6f4de50       f40be0088a83e       4 minutes ago        Running             kube-apiserver            0                   79ca704e9271f
	c154a93ac7de2       99a3486be4f28       4 minutes ago        Running             kube-scheduler            0                   259e2071a573d
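
The container table is the first concrete clue: kindnet-cni attempt 0 exited after roughly three minutes and attempt 1 was restarted about a minute before this dump, while the node's network never initialized. The exited attempt's output is retrievable from inside the node with crictl; a sketch, with IDs abbreviated as in the table:

    # From inside the node (minikube ssh):
    sudo crictl ps -a                # running and exited containers
    sudo crictl logs d208a110372dd   # output of the exited kindnet-cni attempt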
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-03-25 02:10:08 UTC, end at Fri 2022-03-25 02:14:53 UTC. --
	Mar 25 02:10:32 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:10:32.322086308Z" level=info msg="StartContainer for \"21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73\" returns successfully"
	Mar 25 02:10:32 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:10:32.388942035Z" level=info msg="StartContainer for \"6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182\" returns successfully"
	Mar 25 02:10:50 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:10:50.326791726Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Mar 25 02:10:51 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:10:51.694583587Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-7cpjt,Uid:6d1657ba-6fcd-4ee8-8293-b6aa0b7e1fb6,Namespace:kube-system,Attempt:0,}"
	Mar 25 02:10:51 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:10:51.694628846Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-kt955,Uid:87a42b24-60b7-415b-abc9-e574262093c0,Namespace:kube-system,Attempt:0,}"
	Mar 25 02:10:51 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:10:51.719059531Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1 pid=1692
	Mar 25 02:10:51 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:10:51.719362815Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8a442f1cca90dc22406bd36dc233d0703f8ee66a7330470b26232862df7d507 pid=1691
	Mar 25 02:10:51 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:10:51.773608172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7cpjt,Uid:6d1657ba-6fcd-4ee8-8293-b6aa0b7e1fb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8a442f1cca90dc22406bd36dc233d0703f8ee66a7330470b26232862df7d507\""
	Mar 25 02:10:51 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:10:51.776342711Z" level=info msg="CreateContainer within sandbox \"b8a442f1cca90dc22406bd36dc233d0703f8ee66a7330470b26232862df7d507\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Mar 25 02:10:51 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:10:51.792291137Z" level=info msg="CreateContainer within sandbox \"b8a442f1cca90dc22406bd36dc233d0703f8ee66a7330470b26232862df7d507\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b\""
	Mar 25 02:10:51 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:10:51.793202582Z" level=info msg="StartContainer for \"dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b\""
	Mar 25 02:10:51 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:10:51.866377844Z" level=info msg="StartContainer for \"dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b\" returns successfully"
	Mar 25 02:10:51 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:10:51.987046839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-kt955,Uid:87a42b24-60b7-415b-abc9-e574262093c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1\""
	Mar 25 02:10:51 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:10:51.989771753Z" level=info msg="CreateContainer within sandbox \"f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Mar 25 02:10:52 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:10:52.004899518Z" level=info msg="CreateContainer within sandbox \"f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"d208a110372dd3afe93f06ac2658cfd92f99ac83bbb21db8d077402fd5871907\""
	Mar 25 02:10:52 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:10:52.005438131Z" level=info msg="StartContainer for \"d208a110372dd3afe93f06ac2658cfd92f99ac83bbb21db8d077402fd5871907\""
	Mar 25 02:10:52 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:10:52.190276078Z" level=info msg="StartContainer for \"d208a110372dd3afe93f06ac2658cfd92f99ac83bbb21db8d077402fd5871907\" returns successfully"
	Mar 25 02:13:32 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:13:32.529716283Z" level=info msg="shim disconnected" id=d208a110372dd3afe93f06ac2658cfd92f99ac83bbb21db8d077402fd5871907
	Mar 25 02:13:32 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:13:32.529779404Z" level=warning msg="cleaning up after shim disconnected" id=d208a110372dd3afe93f06ac2658cfd92f99ac83bbb21db8d077402fd5871907 namespace=k8s.io
	Mar 25 02:13:32 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:13:32.529792332Z" level=info msg="cleaning up dead shim"
	Mar 25 02:13:32 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:13:32.540599245Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:13:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2086\n"
	Mar 25 02:13:32 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:13:32.757578126Z" level=info msg="CreateContainer within sandbox \"f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Mar 25 02:13:32 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:13:32.772043951Z" level=info msg="CreateContainer within sandbox \"f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"22b6127ab7f71c86e4615a4dc3e722fd358e082ef1371efb6d3f116104e10ef6\""
	Mar 25 02:13:32 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:13:32.772450306Z" level=info msg="StartContainer for \"22b6127ab7f71c86e4615a4dc3e722fd358e082ef1371efb6d3f116104e10ef6\""
	Mar 25 02:13:32 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:13:32.988219666Z" level=info msg="StartContainer for \"22b6127ab7f71c86e4615a4dc3e722fd358e082ef1371efb6d3f116104e10ef6\" returns successfully"
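
containerd records the kindnet-cni shim disconnecting at 02:13:32 and a replacement container starting seconds later, but not why the process died; that detail lives in the container's own logs and the surrounding journal. One way to pull the relevant window, assuming containerd runs under systemd inside the node:

    # From inside the node (minikube ssh):
    sudo journalctl -u containerd \
      --since "2022-03-25 02:13:25" --until "2022-03-25 02:13:45"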
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220325020956-262786
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220325020956-262786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95
	                    minikube.k8s.io/name=default-k8s-different-port-20220325020956-262786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_25T02_10_39_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Mar 2022 02:10:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220325020956-262786
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 25 Mar 2022 02:14:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Mar 2022 02:10:50 +0000   Fri, 25 Mar 2022 02:10:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Mar 2022 02:10:50 +0000   Fri, 25 Mar 2022 02:10:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Mar 2022 02:10:50 +0000   Fri, 25 Mar 2022 02:10:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 25 Mar 2022 02:10:50 +0000   Fri, 25 Mar 2022 02:10:32 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20220325020956-262786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                3d34c106-4e48-46f4-9bcf-ea4602321294
	  Boot ID:                    63fce5d9-a30b-498a-bfed-7dd46d23a363
	  Kernel Version:             5.13.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.3
	  Kube-Proxy Version:         v1.23.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220325020956-262786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m10s
	  kube-system                 kindnet-kt955                                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m3s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220325020956-262786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220325020956-262786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-7cpjt                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220325020956-262786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m1s                   kube-proxy  
	  Normal  NodeHasSufficientMemory  4m22s (x4 over 4m22s)  kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x4 over 4m22s)  kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x4 over 4m22s)  kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m10s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s                  kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s                  kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s                  kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet     Updated Node Allocatable limit across pods
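
The describe output ties the failure together: Ready is False with "cni plugin not initialized", the node still carries node.kubernetes.io/not-ready:NoSchedule, and a PodCIDR is already assigned, so the gap is purely on the CNI side; coredns does not even appear in the pod list because that taint keeps workload pods from scheduling. Two quick checks that surface the same facts, assuming kubectl access to this cluster:

    kubectl get node default-k8s-different-port-20220325020956-262786 \
      -o jsonpath='{.spec.taints}'
    kubectl -n kube-system describe pod -l k8s-app=kube-dns | grep -A3 Events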
	
	* 
	* ==> dmesg <==
	* [  +1.011896] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.023877] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +0.953086] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethf4b51852
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 70 36 f8 5f b0 08 06
	[  +0.031950] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth6535462d
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a c6 0e 0e 23 49 08 06
	[  +0.644934] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth6535462d
	[  +0.401878] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth5b52bbbf
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 37 c2 ed 50 67 08 06
	[  +0.935995] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.035860] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.019942] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[Mar25 02:14] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth3401b1e0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 76 e9 ba cf fb f8 08 06
	[  +0.179199] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vetha9eb2fdf
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 32 f8 c1 5c 31 f0 08 06
	[  +0.564272] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethc1de7e82
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e d1 71 83 67 99 08 06
	[  +0.295714] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth71c4bd69
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe 35 ee 14 12 82 08 06
	
	* 
	* ==> etcd [bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7] <==
	* {"level":"info","ts":"2022-03-25T02:10:32.415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-03-25T02:10:32.415Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-03-25T02:10:32.417Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-03-25T02:10:32.417Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-03-25T02:10:32.417Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-03-25T02:10:32.417Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-25T02:10:32.417Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-25T02:10:33.405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-03-25T02:10:33.405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-03-25T02:10:33.405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-03-25T02:10:33.405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-03-25T02:10:33.405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:default-k8s-different-port-20220325020956-262786 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-25T02:10:33.407Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:10:33.407Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:10:33.407Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:10:33.408Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-03-25T02:10:33.408Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	
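The etcd log above is a clean single-member bootstrap: the member wins its own election at term 2, publishes itself to the cluster, and starts serving client traffic on 2379. If etcd health ever needs confirming on a node like this, a sketch along the following lines should work, reusing the cert paths from the log itself (it assumes an etcdctl binary is reachable, e.g. by exec'ing into the etcd container):

	out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220325020956-262786 sudo ETCDCTL_API=3 etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health
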
	* 
	* ==> kernel <==
	*  02:14:53 up  4:52,  0 users,  load average: 1.25, 1.01, 1.38
	Linux default-k8s-different-port-20220325020956-262786 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182] <==
	* I0325 02:10:35.407779       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0325 02:10:35.407821       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0325 02:10:35.407883       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0325 02:10:35.410667       1 cache.go:39] Caches are synced for autoregister controller
	I0325 02:10:35.418114       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0325 02:10:35.426256       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0325 02:10:36.307036       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0325 02:10:36.307060       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0325 02:10:36.312536       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0325 02:10:36.315494       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0325 02:10:36.315514       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0325 02:10:36.735448       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0325 02:10:36.766331       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0325 02:10:36.903400       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0325 02:10:36.909041       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0325 02:10:36.910002       1 controller.go:611] quota admission added evaluator for: endpoints
	I0325 02:10:36.913660       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0325 02:10:37.498548       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0325 02:10:38.290755       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0325 02:10:38.299864       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0325 02:10:38.310209       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0325 02:10:43.395414       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0325 02:10:50.755106       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0325 02:10:51.255928       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0325 02:10:51.929935       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
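Nothing in the apiserver log is abnormal: caches sync, the default PriorityClasses are created, and quota evaluators register as controllers come online. When the apiserver's own view of its health is wanted, the aggregated readiness endpoint is the quickest probe; a sketch (the context name is this test's profile):

	kubectl --context default-k8s-different-port-20220325020956-262786 get --raw='/readyz?verbose'
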
	* 
	* ==> kube-controller-manager [21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73] <==
	* I0325 02:10:50.352002       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0325 02:10:50.399012       1 shared_informer.go:247] Caches are synced for expand 
	I0325 02:10:50.400127       1 shared_informer.go:247] Caches are synced for attach detach 
	I0325 02:10:50.402593       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0325 02:10:50.413393       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0325 02:10:50.440429       1 shared_informer.go:247] Caches are synced for stateful set 
	I0325 02:10:50.451558       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0325 02:10:50.457285       1 shared_informer.go:247] Caches are synced for namespace 
	I0325 02:10:50.498016       1 shared_informer.go:247] Caches are synced for service account 
	I0325 02:10:50.508551       1 shared_informer.go:247] Caches are synced for resource quota 
	I0325 02:10:50.521736       1 shared_informer.go:247] Caches are synced for disruption 
	I0325 02:10:50.521785       1 disruption.go:371] Sending events to api server.
	I0325 02:10:50.534205       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0325 02:10:50.552612       1 shared_informer.go:247] Caches are synced for deployment 
	I0325 02:10:50.555880       1 shared_informer.go:247] Caches are synced for resource quota 
	I0325 02:10:50.761437       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7cpjt"
	I0325 02:10:50.763623       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-kt955"
	I0325 02:10:50.972353       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0325 02:10:51.015363       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0325 02:10:51.015391       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0325 02:10:51.258069       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0325 02:10:51.357575       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-dpp64"
	I0325 02:10:51.362162       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-9tgbz"
	I0325 02:10:51.549492       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0325 02:10:51.558391       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-dpp64"
	
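The CoreDNS scale-up to 2 followed by an immediate scale-down to 1 is expected on a single-node minikube cluster, which trims the kubeadm default of two replicas; the SuccessfulDelete of coredns-64897985d-dpp64 matches that. A quick confirmation sketch:

	kubectl --context default-k8s-different-port-20220325020956-262786 -n kube-system get deployment coredns
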
	* 
	* ==> kube-proxy [dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b] <==
	* I0325 02:10:51.903633       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0325 02:10:51.903717       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0325 02:10:51.903776       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0325 02:10:51.926345       1 server_others.go:206] "Using iptables Proxier"
	I0325 02:10:51.926371       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0325 02:10:51.926379       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0325 02:10:51.926398       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0325 02:10:51.926824       1 server.go:656] "Version info" version="v1.23.3"
	I0325 02:10:51.927429       1 config.go:226] "Starting endpoint slice config controller"
	I0325 02:10:51.927435       1 config.go:317] "Starting service config controller"
	I0325 02:10:51.927463       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0325 02:10:51.927465       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0325 02:10:52.028308       1 shared_informer.go:247] Caches are synced for service config 
	I0325 02:10:52.028348       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
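The proxyMode="" line means no mode was set in the kube-proxy configuration, so it fell back to iptables, which is fine. If the configured mode ever needs checking, the kubeadm-generated ConfigMap holds it; a sketch (this assumes the standard kube-proxy ConfigMap layout with a config.conf key):

	kubectl --context default-k8s-different-port-20220325020956-262786 -n kube-system \
	  get cm kube-proxy -o jsonpath='{.data.config\.conf}' | grep -i mode
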
	* 
	* ==> kube-scheduler [c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd] <==
	* W0325 02:10:35.393403       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0325 02:10:35.393411       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0325 02:10:35.393426       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 02:10:35.393427       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0325 02:10:35.393780       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 02:10:35.393803       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0325 02:10:35.393937       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0325 02:10:35.393971       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0325 02:10:35.394024       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0325 02:10:35.394054       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0325 02:10:35.394671       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0325 02:10:35.394703       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 02:10:35.394724       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 02:10:35.394701       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0325 02:10:35.394676       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0325 02:10:35.394772       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0325 02:10:36.234022       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 02:10:36.234107       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0325 02:10:36.361824       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 02:10:36.361852       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0325 02:10:36.372015       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:10:36.372056       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0325 02:10:36.495928       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0325 02:10:36.495976       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0325 02:10:38.389536       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
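The burst of "forbidden" list/watch errors in the scheduler log is the usual startup race: the scheduler comes up a moment before kubeadm finishes installing its RBAC bindings, then recovers, as the final "Caches are synced" line shows. A sketch for confirming the binding exists afterwards:

	kubectl --context default-k8s-different-port-20220325020956-262786 get clusterrolebinding system:kube-scheduler
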
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-03-25 02:10:08 UTC, end at Fri 2022-03-25 02:14:53 UTC. --
	Mar 25 02:12:53 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:12:53.646048    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:12:58 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:12:58.647363    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:13:03 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:13:03.648809    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:13:08 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:13:08.649484    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:13:13 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:13:13.650709    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:13:18 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:13:18.652233    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:13:23 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:13:23.653837    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:13:28 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:13:28.655420    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:13:32 default-k8s-different-port-20220325020956-262786 kubelet[1287]: I0325 02:13:32.755537    1287 scope.go:110] "RemoveContainer" containerID="d208a110372dd3afe93f06ac2658cfd92f99ac83bbb21db8d077402fd5871907"
	Mar 25 02:13:33 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:13:33.657163    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:13:38 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:13:38.658824    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:13:43 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:13:43.659553    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:13:48 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:13:48.660903    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:13:53 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:13:53.661609    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:13:58 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:13:58.662777    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:14:03 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:14:03.664007    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:14:08 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:14:08.665008    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:14:13 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:14:13.666576    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:14:18 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:14:18.668215    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:14:23 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:14:23.669201    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:14:28 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:14:28.670731    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:14:33 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:14:33.672124    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:14:38 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:14:38.673386    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:14:43 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:14:43.674593    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:14:48 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:14:48.675645    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
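
The decisive signal in the dump above is the kubelet repeating "cni plugin not initialized" every five seconds for the whole observation window. With no working CNI the node never reports Ready, which is consistent with coredns and storage-provisioner staying non-running below and with the scheduling failure in the next test. A sketch for checking whether any CNI config was actually written into the node (the paths are the conventional CNI locations, which is an assumption for this image):

	out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220325020956-262786 sudo ls /etc/cni/net.d
	out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220325020956-262786 sudo crictl info | grep -i cni
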
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220325020956-262786 -n default-k8s-different-port-20220325020956-262786
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20220325020956-262786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
helpers_test.go:271: non-running pods: coredns-64897985d-9tgbz storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220325020956-262786 describe pod coredns-64897985d-9tgbz storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220325020956-262786 describe pod coredns-64897985d-9tgbz storage-provisioner: exit status 1 (58.432779ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-9tgbz" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20220325020956-262786 describe pod coredns-64897985d-9tgbz storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/FirstStart (297.50s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (484.68s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220325020956-262786 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [90d73f0b-a0de-4f4c-a779-11be9a363bbb] Pending

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:343: "busybox" [90d73f0b-a0de-4f4c-a779-11be9a363bbb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
E0325 02:15:19.140414  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 02:15:19.596189  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
E0325 02:15:21.342345  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:15:43.464224  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
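
These cert_rotation errors are cross-test noise, not part of this failure: the shared test process still holds client-certificate watchers for profiles (auto, bridge, enable-default-cni, addons) that earlier tests already deleted, so the key reloads fail with "no such file or directory". A sketch for listing which profiles actually remain:

	out/minikube-linux-amd64 profile list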

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: ***** TestStartStop/group/default-k8s-different-port/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:181: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220325020956-262786 -n default-k8s-different-port-20220325020956-262786
start_stop_delete_test.go:181: TestStartStop/group/default-k8s-different-port/serial/DeployApp: showing logs for failed pods as of 2022-03-25 02:22:55.080168695 +0000 UTC m=+3899.406297957
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220325020956-262786 describe po busybox -n default
start_stop_delete_test.go:181: (dbg) kubectl --context default-k8s-different-port-20220325020956-262786 describe po busybox -n default:
Name:         busybox
Namespace:    default
Priority:     0
Node:         <none>
Labels:       integration-test=busybox
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dwnt4 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-dwnt4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  47s (x8 over 8m1s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
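
The single FailedScheduling event ties this test back to the FirstStart failure: the node never became Ready (no CNI), so it kept its node.kubernetes.io/not-ready taint. The pod's default tolerations above only cover the NoExecute variant for 300s; the NoSchedule condition taint carried by a NotReady node is not covered, so busybox can never be placed. A sketch for confirming the taint directly:

	kubectl --context default-k8s-different-port-20220325020956-262786 get nodes
	kubectl --context default-k8s-different-port-20220325020956-262786 describe nodes | grep -i taint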
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220325020956-262786 logs busybox -n default
start_stop_delete_test.go:181: (dbg) kubectl --context default-k8s-different-port-20220325020956-262786 logs busybox -n default:
start_stop_delete_test.go:181: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20220325020956-262786
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20220325020956-262786:

-- stdout --
	[
	    {
	        "Id": "0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4",
	        "Created": "2022-03-25T02:10:07.830065737Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501164,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-25T02:10:08.208646726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/hosts",
	        "LogPath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4-json.log",
	        "Name": "/default-k8s-different-port-20220325020956-262786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220325020956-262786:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220325020956-262786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873-init/diff:/var/lib/docker/overlay2/be54eb94fcdf63a43430c63a2deca34069b6322a2c5c80bf32c41c253b4eef44/diff:/var/lib/docker/overlay2/21ae1babc9289fff260c3571871aeb833b37e21656a9cc3eb8df07eb3fe4f79a/diff:/var/lib/docker/overlay2/3ee3331c2ddb88569b411d0ab54a2ef7d7d01ca16c80ced0833580bfbe9bdada/diff:/var/lib/docker/overlay2/d8bc8d60c9bd47ff1095b644ef6d44396d637a148bcebd5ea3b7706fee6b13fe/diff:/var/lib/docker/overlay2/ca1519d93c9c70a99f709b179bab33e31837f4b561c407c362770656a0ad970a/diff:/var/lib/docker/overlay2/8b7ed626d5c01c442f80e5e9bbe87bdaa4e3b209e4d0720010e78ab32631a44a/diff:/var/lib/docker/overlay2/fb54723378f675b6bc72cd8608807007fdf0fc435e1383398764588c2881dcc7/diff:/var/lib/docker/overlay2/20deb1df880f3adcdce0caa0e0b6ce0170bb01f7b7c564aa7c00c10e886a8422/diff:/var/lib/docker/overlay2/3e9c58516a6ca7eb07cbd77ece10826bcffc2c564c20a046413c894f1e457c14/diff:/var/lib/docker/overlay2/9fb4a572727350e63058db77497edb3aa8f3fd157bf3faa4b882f3d6218a2d2c/diff:/var/lib/docker/overlay2/2dad70b776042365cd2686f6925d1728b98e82e82f5ec21fcafaa6ce796653ed/diff:/var/lib/docker/overlay2/d94272e0e249e656b05e1483e035d137254d3bab6b9c568065d1f8783a72cf04/diff:/var/lib/docker/overlay2/c92254120acded698585ef577c9ac3d6f73267981cf36a87ee38ccd694f47b94/diff:/var/lib/docker/overlay2/84b4bbb670c367ba779baabe503b5345e2c3e2beb5a4505c3b235e5db4e89ee6/diff:/var/lib/docker/overlay2/4981a02b24aef7d5c066a42837381dcdd4a299b491d8e55523fca674cd0db0d1/diff:/var/lib/docker/overlay2/c3c34e9c466bb3a144a51042f0930825943916afe285a7f97644c400518f341f/diff:/var/lib/docker/overlay2/44f4921d100d6ba90db390588513726503aec84844325bd99eeb137c6018277f/diff:/var/lib/docker/overlay2/a39a458488b7f863079e4c6b58196e8a4f9082987519a734c45a007cd0d94828/diff:/var/lib/docker/overlay2/f0312047c7b0b02fd66fd826e23406e40cee0ca3ceecfd3ead5dcecbc5026230/diff:/var/lib/docker/overlay2/96c9397a20500e41888794ddb5877995a1734042648a24d59ca0d2ec5021e9de/diff:/var/lib/docker/overlay2/8dea2fad08fc7127380e7b5ee48074c49d9bb8abb4e0e626d1753b47e734e16a/diff:/var/lib/docker/overlay2/b45ce3d74626e250be956220b3bdd19784c7b5f160566cf2abc4e3bebec2e787/diff:/var/lib/docker/overlay2/c53d5b53646725c2e75d104fbdf63f67b1a6d4ec7be410f678c39db7ca88704f/diff:/var/lib/docker/overlay2/c989625713fed7c79c6acf122f86cb4a5d36c5c25f16b6ff042aba0f5c76ef40/diff:/var/lib/docker/overlay2/062c90de70f705242f19d7fd008480be165d852e536336d97bcfe7aaba03bc2c/diff:/var/lib/docker/overlay2/9dd5e3e1997449a8dd0820e30ab1aa5b34db265e9783f9431ebcca7ceaf17510/diff:/var/lib/docker/overlay2/4cb50a0a67380109d348cd3005e5b855fceaf243cf5b0130df8952ed58e6c56c/diff:/var/lib/docker/overlay2/1fce572a3789e30bd91fd684a3bc2cae58743b3718b1d078378158f22156795e/diff:/var/lib/docker/overlay2/2bb28738c8f2de75a3da83169e8b29e28c57bf73908fe80dbca06551ac39d459/diff:/var/lib/docker/overlay2/70c1f9c120af3acda7bbd97c063aeed205a47f16b2818b7a2c4e5cfa2e3321bf/diff:/var/lib/docker/overlay2/84cfc718f71abd3da77845f467dceaeceb62953d1f92e9cb2d966b19d2e9a733/diff:/var/lib/docker/overlay2/8d6f862f75e903072cefca0f974c925dc5946ac5bf7bcb923adecf23cdb3d454/diff:/var/lib/docker/overlay2/778af97f4ec3a1e9ceed247958939b375c3209058ee649ac0231b3ccf59c0e5d/diff:/var/lib/docker/overlay2/c0e0a5b57f41ef9ddf67d67f928bcbbd060abb8aa3ec732c9ee48b3d5ce723a2/diff:/var/lib/docker/overlay2/f4bc2ed173f4985e492d89df0a08aa6017952a9ac37054537d57bb7589c1560e/diff:/var/lib/docker/overlay2/562d496753ef0c1e8279787dfdb7cb4d6e8cfbd0eaf79a1f9dc3fd10916330b5/diff:/var/lib/docker/overlay2/717fb77b4f16514e3bd496845adfe213bd63609053b43f6d800e6757197f0f04/diff:/var/lib/docker/overlay2/4e8d84337665652419a5a40f908d308087c202f55b785114c1e38be84a17eca7/diff:/var/lib/docker/overlay2/5b34f3b4b29c9f9ab991b524096584bbf01d14e9d8d4b7786bda6d28241999e8/diff:/var/lib/docker/overlay2/49e6c28c6a50420d2f8e2a3d3278d425495086d9478a7ece39dd989925949a5d/diff:/var/lib/docker/overlay2/86c1534e0117ca4e106fa3c177c4f1b2d85e37b9d2a5dceeb007afff1721713e/diff:/var/lib/docker/overlay2/c5013a5641f131cadca99884c2ae5b785bfae4a079463490ea0cd215cd884d43/diff:/var/lib/docker/overlay2/f61ccdb261987275521111370c06a14baf102e5047e24281f278eaaee820a410/diff:/var/lib/docker/overlay2/46838e2b0c3f67b4bfda29963d76e2c8babbd54904a4a6f5745e924a73437c2d/diff:/var/lib/docker/overlay2/16180439a4d3ee12ff794b26cbfd692186d7785b4c6f33c8c57416535667c54e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220325020956-262786",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220325020956-262786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220325020956-262786",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220325020956-262786",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220325020956-262786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "752dba0b0d51e54f65ae14a0ffc9beb457cc13e80db6430b791d2057b780914e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49574"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49573"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49570"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49572"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49571"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/752dba0b0d51",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220325020956-262786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0e271f66fa8d",
	                        "default-k8s-different-port-20220325020956-262786"
	                    ],
	                    "NetworkID": "c5c0224540019d877be5e36bfc556dc0a2d83980f6e5b563be26e38eaad27a38",
	                    "EndpointID": "ded2360703a0715d75d023434cbc7944232d0b2cfe6e083bf6f1fbb0113e0018",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
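
At the Docker layer the node container is healthy: running since 02:10, privileged, ports 22/2376/5000/8444/32443 published to localhost, and 192.168.49.2 assigned on the profile network, so the breakage is inside the node rather than in the container runtime around it. When only a couple of fields matter, docker inspect's Go-template flag avoids the full dump; a sketch:

	docker inspect -f '{{.State.Status}}' default-k8s-different-port-20220325020956-262786
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' default-k8s-different-port-20220325020956-262786
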
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220325020956-262786 -n default-k8s-different-port-20220325020956-262786
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220325020956-262786 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-different-port-20220325020956-262786 logs -n 25: (1.007899998s)
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                         | bridge-20220325014920-262786                     | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:53 UTC | Fri, 25 Mar 2022 02:09:56 UTC |
	|         | bridge-20220325014920-262786                               |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20220325020956-262786      | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:56 UTC | Fri, 25 Mar 2022 02:09:56 UTC |
	|         | disable-driver-mounts-20220325020956-262786                |                                                  |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:12 UTC | Fri, 25 Mar 2022 02:14:36 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.3                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:47 UTC | Fri, 25 Mar 2022 02:14:47 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| pause   | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:47 UTC | Fri, 25 Mar 2022 02:14:48 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| unpause | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:49 UTC | Fri, 25 Mar 2022 02:14:50 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:52 UTC | Fri, 25 Mar 2022 02:14:53 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:51 UTC | Fri, 25 Mar 2022 02:14:54 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:54 UTC | Fri, 25 Mar 2022 02:14:54 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20220325021454-262786 --memory=2200          | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:54 UTC | Fri, 25 Mar 2022 02:15:49 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:49 UTC | Fri, 25 Mar 2022 02:15:50 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:50 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:21 UTC | Fri, 25 Mar 2022 02:16:22 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:23 UTC | Fri, 25 Mar 2022 02:16:24 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:24 UTC | Fri, 25 Mar 2022 02:16:25 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:25 UTC | Fri, 25 Mar 2022 02:16:35 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:35 UTC | Fri, 25 Mar 2022 02:16:35 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20220325021454-262786 --memory=2200          | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:45 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:45 UTC | Fri, 25 Mar 2022 02:16:45 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:45 UTC | Fri, 25 Mar 2022 02:16:46 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:47 UTC | Fri, 25 Mar 2022 02:16:48 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:48 UTC | Fri, 25 Mar 2022 02:16:51 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:51 UTC | Fri, 25 Mar 2022 02:16:52 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786                      | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:19:35 UTC | Fri, 25 Mar 2022 02:19:36 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
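The table above is minikube's command audit log: every CLI invocation in this window, with the profile it targeted, the user, the minikube version, and start/end times. For orientation only, here is a rough Go sketch of driving minikube from a test and capturing its output the way such a harness does; the binary path and profile name below are placeholders, not values from this run, and this is not the harness's actual code:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runMinikube shells out to a minikube binary, capturing combined
// stdout/stderr and enforcing a timeout, in the spirit of the audit
// table's "logs -n 25" style invocations.
func runMinikube(bin string, args ...string) (string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	out, err := exec.CommandContext(ctx, bin, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// "minikube" and "demo-profile" are placeholders for illustration.
	out, err := runMinikube("minikube", "-p", "demo-profile", "logs", "-n", "25")
	if err != nil {
		fmt.Println("command failed:", err)
	}
	fmt.Print(out)
}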
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 02:16:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0325 02:16:35.482311  519649 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:16:35.482451  519649 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:16:35.482462  519649 out.go:310] Setting ErrFile to fd 2...
	I0325 02:16:35.482467  519649 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:16:35.482575  519649 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:16:35.482813  519649 out.go:304] Setting JSON to false
	I0325 02:16:35.484309  519649 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17668,"bootTime":1648156928,"procs":518,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:16:35.484382  519649 start.go:125] virtualization: kvm guest
	I0325 02:16:35.487068  519649 out.go:176] * [no-preload-20220325020326-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:16:35.488730  519649 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:16:35.487298  519649 notify.go:193] Checking for updates...
	I0325 02:16:35.490311  519649 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:16:35.491877  519649 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:16:35.493486  519649 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:16:35.495057  519649 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:16:35.496266  519649 config.go:176] Loaded profile config "no-preload-20220325020326-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:16:35.497491  519649 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:16:35.540694  519649 docker.go:136] docker version: linux-20.10.14
	I0325 02:16:35.540841  519649 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:16:35.641548  519649 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:16:35.575580325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:16:35.641678  519649 docker.go:253] overlay module found
	I0325 02:16:35.644240  519649 out.go:176] * Using the docker driver based on existing profile
	I0325 02:16:35.644293  519649 start.go:284] selected driver: docker
	I0325 02:16:35.644302  519649 start.go:801] validating driver "docker" against &{Name:no-preload-20220325020326-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:16:35.644458  519649 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:16:35.644501  519649 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:16:35.644530  519649 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:16:35.646030  519649 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:16:35.646742  519649 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:16:35.752278  519649 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:16:35.682730162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0325 02:16:35.752465  519649 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:16:35.752492  519649 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:16:35.754658  519649 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:16:35.754778  519649 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 02:16:35.754810  519649 cni.go:93] Creating CNI manager for ""
	I0325 02:16:35.754821  519649 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:16:35.754840  519649 start_flags.go:304] config:
	{Name:no-preload-20220325020326-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:16:35.756791  519649 out.go:176] * Starting control plane node no-preload-20220325020326-262786 in cluster no-preload-20220325020326-262786
	I0325 02:16:35.756829  519649 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:16:35.758358  519649 out.go:176] * Pulling base image ...
	I0325 02:16:35.758390  519649 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime containerd
	I0325 02:16:35.758492  519649 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:16:35.758563  519649 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/config.json ...
	I0325 02:16:35.758688  519649 cache.go:107] acquiring lock: {Name:mkadc5033eb4d9179acd1c6e7ff0e25d4981568c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758710  519649 cache.go:107] acquiring lock: {Name:mk0987b0339865c5416a6746bce8670ad78c0a9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758707  519649 cache.go:107] acquiring lock: {Name:mkdc6a82c5ad28a9b97463884b87944eaef2fef7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758830  519649 cache.go:107] acquiring lock: {Name:mk140b8e2c06d387b642b813a7efd82a9f19d6c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758829  519649 cache.go:107] acquiring lock: {Name:mk8ed79f1ecf0bc83b0d3ead06534032f65db356 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758880  519649 cache.go:107] acquiring lock: {Name:mkcb4c0577b6fb6a4cc15cd1cfc04742789dcc24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758920  519649 cache.go:107] acquiring lock: {Name:mk1134717661547774a1dd6d6e2854162646543d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758911  519649 cache.go:107] acquiring lock: {Name:mk61dd10aefdeb5283d07e3024688797852e36d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.759022  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
	I0325 02:16:35.759030  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0 exists
	I0325 02:16:35.759047  519649 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 372.469µs
	I0325 02:16:35.759047  519649 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0" took 131.834µs
	I0325 02:16:35.759061  519649 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
	I0325 02:16:35.759064  519649 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0 succeeded
	I0325 02:16:35.758904  519649 cache.go:107] acquiring lock: {Name:mkcf6d57389d13d4e31240b1cdf9af5455cf82f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.759073  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0 exists
	I0325 02:16:35.759078  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0325 02:16:35.759099  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 exists
	I0325 02:16:35.759090  519649 cache.go:107] acquiring lock: {Name:mkd382d09a068cdb98cdc085f7d3d174faef8f1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.759109  519649 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1" took 210.056µs
	I0325 02:16:35.759116  519649 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
	I0325 02:16:35.759104  519649 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 350.331µs
	I0325 02:16:35.759086  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists
	I0325 02:16:35.759124  519649 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0325 02:16:35.759102  519649 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0" took 354.111µs
	I0325 02:16:35.759149  519649 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0 succeeded
	I0325 02:16:35.759143  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0 exists
	I0325 02:16:35.759144  519649 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 439.796µs
	I0325 02:16:35.759168  519649 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded
	I0325 02:16:35.759127  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists
	I0325 02:16:35.759167  519649 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0" took 339.705µs
	I0325 02:16:35.759178  519649 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0 succeeded
	I0325 02:16:35.759105  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0325 02:16:35.759188  519649 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 362.557µs
	I0325 02:16:35.759203  519649 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded
	I0325 02:16:35.759199  519649 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 504.454µs
	I0325 02:16:35.759217  519649 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0325 02:16:35.759228  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0 exists
	I0325 02:16:35.759276  519649 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0" took 279.744µs
	I0325 02:16:35.759305  519649 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0 succeeded
	I0325 02:16:35.759331  519649 cache.go:87] Successfully saved all images to host disk.
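	Each "cache image ... took ...µs" pair above is the cache fast path: a per-image lock is acquired, the tarball under .minikube/cache/images is found to already exist, and the save is skipped, which is why the recorded durations are in the microsecond range. A minimal sketch of that check-then-skip shape, with placeholder paths and without the real pull-and-save logic:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"
)

// ensureCached skips the expensive save when the per-image tar file already
// exists, mirroring the "exists ... took Nµs ... succeeded" lines above.
func ensureCached(cacheDir, image string, mu *sync.Mutex) error {
	mu.Lock() // per-image lock, as in cache.go's "acquiring lock" lines
	defer mu.Unlock()
	start := time.Now()
	dst := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("cache image %q -> %q took %s (exists)\n", image, dst, time.Since(start))
		return nil
	}
	// A real implementation would pull and save the image here.
	return fmt.Errorf("not cached; saving is out of scope for this sketch")
}

func main() {
	var mu sync.Mutex
	_ = ensureCached(os.TempDir(), "k8s.gcr.io/pause:3.6", &mu)
}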
	I0325 02:16:35.794208  519649 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:16:35.794250  519649 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:16:35.794266  519649 cache.go:208] Successfully downloaded all kic artifacts
	I0325 02:16:35.794300  519649 start.go:348] acquiring machines lock for no-preload-20220325020326-262786: {Name:mk0b68e00c1687cd51ada59f78a2181cd58687dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.794388  519649 start.go:352] acquired machines lock for "no-preload-20220325020326-262786" in 69.622µs
	I0325 02:16:35.794408  519649 start.go:94] Skipping create...Using existing machine configuration
	I0325 02:16:35.794412  519649 fix.go:55] fixHost starting: 
	I0325 02:16:35.794639  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:16:35.829675  519649 fix.go:108] recreateIfNeeded on no-preload-20220325020326-262786: state=Stopped err=<nil>
	W0325 02:16:35.829710  519649 fix.go:134] unexpected machine state, will restart: <nil>
	I0325 02:16:30.919166  516439 api_server.go:165] Checking apiserver status ...
	I0325 02:16:30.919257  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:30.927996  516439 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:30.928016  516439 api_server.go:165] Checking apiserver status ...
	I0325 02:16:30.928054  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:30.936308  516439 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:30.936337  516439 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0325 02:16:30.936344  516439 kubeadm.go:1067] stopping kube-system containers ...
	I0325 02:16:30.936355  516439 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0325 02:16:30.936402  516439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:16:30.961816  516439 cri.go:87] found id: "e3ee84b63ba33bcbfea4203eedc8a7d9573afda58890320b68f36d9cdf3bf2a8"
	I0325 02:16:30.961847  516439 cri.go:87] found id: "c16f6adb1790c3742b747bf61bfd1f357b72b0991ac3be7dbf874baa850fa2f5"
	I0325 02:16:30.961853  516439 cri.go:87] found id: "a9ae918cd79ec7953a8c2b8e19f9dd9716b4e319662e0b15cd6c7656e2668966"
	I0325 02:16:30.961869  516439 cri.go:87] found id: "576c531344a89713a22df123a23d95cf4df6514aa92aeadd890dd6891ea08031"
	I0325 02:16:30.961874  516439 cri.go:87] found id: "016ff43b53acf403c3cade0a6b87ed824539070c26fb0a1a43b665e04899b8ca"
	I0325 02:16:30.961880  516439 cri.go:87] found id: "74fb5be813cd2fffa2e56033edaaaac236ae7d6186cc67ee6afceba343a5edb8"
	I0325 02:16:30.961885  516439 cri.go:87] found id: ""
	I0325 02:16:30.961891  516439 cri.go:232] Stopping containers: [e3ee84b63ba33bcbfea4203eedc8a7d9573afda58890320b68f36d9cdf3bf2a8 c16f6adb1790c3742b747bf61bfd1f357b72b0991ac3be7dbf874baa850fa2f5 a9ae918cd79ec7953a8c2b8e19f9dd9716b4e319662e0b15cd6c7656e2668966 576c531344a89713a22df123a23d95cf4df6514aa92aeadd890dd6891ea08031 016ff43b53acf403c3cade0a6b87ed824539070c26fb0a1a43b665e04899b8ca 74fb5be813cd2fffa2e56033edaaaac236ae7d6186cc67ee6afceba343a5edb8]
	I0325 02:16:30.961942  516439 ssh_runner.go:195] Run: which crictl
	I0325 02:16:30.965080  516439 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop e3ee84b63ba33bcbfea4203eedc8a7d9573afda58890320b68f36d9cdf3bf2a8 c16f6adb1790c3742b747bf61bfd1f357b72b0991ac3be7dbf874baa850fa2f5 a9ae918cd79ec7953a8c2b8e19f9dd9716b4e319662e0b15cd6c7656e2668966 576c531344a89713a22df123a23d95cf4df6514aa92aeadd890dd6891ea08031 016ff43b53acf403c3cade0a6b87ed824539070c26fb0a1a43b665e04899b8ca 74fb5be813cd2fffa2e56033edaaaac236ae7d6186cc67ee6afceba343a5edb8
	I0325 02:16:30.990650  516439 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0325 02:16:31.001312  516439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:16:31.009030  516439 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Mar 25 02:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Mar 25 02:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Mar 25 02:15 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Mar 25 02:15 /etc/kubernetes/scheduler.conf
	
	I0325 02:16:31.009104  516439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0325 02:16:31.016238  516439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0325 02:16:31.022869  516439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0325 02:16:31.029565  516439 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:31.029621  516439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0325 02:16:31.036474  516439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0325 02:16:31.043067  516439 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:31.043125  516439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0325 02:16:31.049642  516439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:16:31.056883  516439 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0325 02:16:31.056914  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:31.101487  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:31.789161  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:31.922185  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:31.984722  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:32.028325  516439 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:16:32.028393  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:32.537756  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:33.037616  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:33.537339  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:34.037634  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:34.537880  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:35.037295  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:35.538072  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:35.968327  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:37.968941  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
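	Three concurrent test processes (519649, 516439, 496534) interleave in this stream. The 516439 lines form minikube's wait-for-apiserver loop: `sudo pgrep -xnf kube-apiserver.*minikube.*` is retried on roughly a 500ms cadence until a PID appears or the wait times out. A simplified sketch of that retry loop, running pgrep locally in place of minikube's SSH runner (an illustration, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until kube-apiserver shows up or the
// deadline passes, matching the ~500ms cadence visible in the log above.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for kube-apiserver process")
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}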
	I0325 02:16:35.833187  519649 out.go:176] * Restarting existing docker container for "no-preload-20220325020326-262786" ...
	I0325 02:16:35.833270  519649 cli_runner.go:133] Run: docker start no-preload-20220325020326-262786
	I0325 02:16:36.223867  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:16:36.260748  519649 kic.go:420] container "no-preload-20220325020326-262786" state is running.
	I0325 02:16:36.261158  519649 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220325020326-262786
	I0325 02:16:36.295907  519649 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/config.json ...
	I0325 02:16:36.296110  519649 machine.go:88] provisioning docker machine ...
	I0325 02:16:36.296134  519649 ubuntu.go:169] provisioning hostname "no-preload-20220325020326-262786"
	I0325 02:16:36.296174  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:36.331323  519649 main.go:130] libmachine: Using SSH client type: native
	I0325 02:16:36.331546  519649 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49589 <nil> <nil>}
	I0325 02:16:36.331564  519649 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20220325020326-262786 && echo "no-preload-20220325020326-262786" | sudo tee /etc/hostname
	I0325 02:16:36.332175  519649 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50526->127.0.0.1:49589: read: connection reset by peer
	I0325 02:16:39.464533  519649 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20220325020326-262786
	
	I0325 02:16:39.464619  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:39.500131  519649 main.go:130] libmachine: Using SSH client type: native
	I0325 02:16:39.500311  519649 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49589 <nil> <nil>}
	I0325 02:16:39.500341  519649 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220325020326-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220325020326-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220325020326-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 02:16:39.619029  519649 main.go:130] libmachine: SSH cmd err, output: <nil>: 
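	The shell block above is the provisioner's idempotent /etc/hosts patch: if no line already ends with the machine name, it rewrites an existing 127.0.1.1 entry or appends a new one. The same transformation expressed as a small, self-contained Go function, purely for illustration (minikube itself runs the sed/tee commands over SSH, as logged):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// pinHostname reproduces the effect of the shell snippet above on a hosts
// file given as a string: leave it alone if the name is already mapped,
// otherwise rewrite the 127.0.1.1 line or append one.
func pinHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // already mapped
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	// "no-preload-demo" is a placeholder machine name.
	fmt.Print(pinHostname("127.0.0.1 localhost\n", "no-preload-demo"))
}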
	I0325 02:16:39.619064  519649 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:16:39.619085  519649 ubuntu.go:177] setting up certificates
	I0325 02:16:39.619100  519649 provision.go:83] configureAuth start
	I0325 02:16:39.619161  519649 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220325020326-262786
	I0325 02:16:39.653347  519649 provision.go:138] copyHostCerts
	I0325 02:16:39.653407  519649 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:16:39.653421  519649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:16:39.653484  519649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:16:39.653581  519649 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:16:39.653592  519649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:16:39.653616  519649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:16:39.653673  519649 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:16:39.653687  519649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:16:39.653707  519649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:16:39.653765  519649 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220325020326-262786 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220325020326-262786]
	I0325 02:16:39.955829  519649 provision.go:172] copyRemoteCerts
	I0325 02:16:39.955898  519649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:16:39.955933  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:39.989898  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.079856  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0325 02:16:40.099567  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:16:40.119824  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0325 02:16:40.140874  519649 provision.go:86] duration metric: configureAuth took 521.759605ms
	I0325 02:16:40.140906  519649 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:16:40.141163  519649 config.go:176] Loaded profile config "no-preload-20220325020326-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:16:40.141185  519649 machine.go:91] provisioned docker machine in 3.845060196s
	I0325 02:16:40.141193  519649 start.go:302] post-start starting for "no-preload-20220325020326-262786" (driver="docker")
	I0325 02:16:40.141201  519649 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:16:40.141260  519649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:16:40.141308  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:40.180699  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.271442  519649 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:16:40.274944  519649 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:16:40.275028  519649 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:16:40.275041  519649 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:16:40.275051  519649 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:16:40.275064  519649 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:16:40.275115  519649 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:16:40.275176  519649 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:16:40.275263  519649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:16:40.282729  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:16:40.301545  519649 start.go:305] post-start completed in 160.334219ms
	I0325 02:16:40.301629  519649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:16:40.301692  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:40.340243  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.427579  519649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:16:40.431311  519649 fix.go:57] fixHost completed within 4.636891748s
	I0325 02:16:40.431332  519649 start.go:81] releasing machines lock for "no-preload-20220325020326-262786", held for 4.636932836s
	I0325 02:16:40.431419  519649 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220325020326-262786
	I0325 02:16:40.471929  519649 ssh_runner.go:195] Run: systemctl --version
	I0325 02:16:40.471972  519649 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:16:40.471994  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:40.472031  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:36.038098  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:36.537401  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:37.037404  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:37.537180  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:38.037556  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:38.099215  516439 api_server.go:71] duration metric: took 6.070889838s to wait for apiserver process to appear ...
	I0325 02:16:38.099286  516439 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:16:38.099301  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:38.099706  516439 api_server.go:256] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": dial tcp 192.168.58.2:8443: connect: connection refused
	I0325 02:16:38.600314  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:41.706206  516439 api_server.go:266] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0325 02:16:41.706241  516439 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0325 02:16:42.100667  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:42.105436  516439 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:16:42.105478  516439 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:16:42.599961  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:42.605081  516439 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:16:42.605109  516439 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:16:43.100711  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:43.105895  516439 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0325 02:16:43.112809  516439 api_server.go:140] control plane version: v1.23.4-rc.0
	I0325 02:16:43.112833  516439 api_server.go:130] duration metric: took 5.013539931s to wait for apiserver health ...
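
	For reference, the wait loop above (api_server.go:240/266) probes https://192.168.58.2:8443/healthz roughly every 500ms, treating connection-refused, 403, and 500 responses as "not ready yet" and stopping on the first 200. A minimal Go sketch of that pattern, assuming a self-signed apiserver certificate (hence the skipped TLS verification) and illustrative timeouts; it is not minikube's actual implementation:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    // waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	    func waitForHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout: 2 * time.Second,
	            // The bootstrapping apiserver serves a self-signed certificate,
	            // so verification is skipped (assumption for this sketch).
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // healthz returned 200: ok
	                }
	                // 403/500 mean the apiserver is up but still initializing
	                // (rbac/bootstrap-roles etc.); keep polling.
	                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("timed out waiting for %s", url)
	    }

	    func main() {
	        if err := waitForHealthz("https://192.168.58.2:8443/healthz", time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }
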
	I0325 02:16:43.112846  516439 cni.go:93] Creating CNI manager for ""
	I0325 02:16:43.112855  516439 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:16:43.115000  516439 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:16:43.115081  516439 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:16:43.119112  516439 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl ...
	I0325 02:16:43.119136  516439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:16:43.132304  516439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
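
	The CNI step renders the kindnet manifest to /var/tmp/minikube/cni.yaml on the node ("scp memory") and applies it with the version-pinned kubectl. A rough equivalent of the apply, assuming the paths from the log and running the command directly rather than over SSH as minikube's ssh_runner does:

	    package main

	    import (
	        "os"
	        "os/exec"
	    )

	    func main() {
	        // Paths mirror the log; in minikube these run on the node over SSH,
	        // not locally -- this sketch invokes them directly.
	        const kubectl = "/var/lib/minikube/binaries/v1.23.4-rc.0/kubectl"
	        const manifest = "/var/tmp/minikube/cni.yaml"

	        cmd := exec.Command("sudo", kubectl, "apply",
	            "--kubeconfig=/var/lib/minikube/kubeconfig",
	            "-f", manifest)
	        cmd.Stdout = os.Stdout
	        cmd.Stderr = os.Stderr
	        if err := cmd.Run(); err != nil {
	            os.Exit(1)
	        }
	    }
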
	I0325 02:16:43.929421  516439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:16:43.937528  516439 system_pods.go:59] 9 kube-system pods found
	I0325 02:16:43.937572  516439 system_pods.go:61] "coredns-64897985d-p65tg" [e65563a2-916d-4e4f-9899-45abcf6e43e6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:43.937583  516439 system_pods.go:61] "etcd-newest-cni-20220325021454-262786" [301b74c1-25bb-412c-8781-5b02da9c4093] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0325 02:16:43.937590  516439 system_pods.go:61] "kindnet-td766" [40872158-4184-4df2-ae83-e42d228b4223] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:16:43.937600  516439 system_pods.go:61] "kube-apiserver-newest-cni-20220325021454-262786" [d2e43879-332a-448a-97c5-1a2bea717597] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0325 02:16:43.937605  516439 system_pods.go:61] "kube-controller-manager-newest-cni-20220325021454-262786" [8af92ea2-d71d-4620-ac1c-594d1cf3cd2b] Running
	I0325 02:16:43.937612  516439 system_pods.go:61] "kube-proxy-fj7dd" [1af095d5-b04f-4be9-bd3b-e2c7a2b373b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0325 02:16:43.937621  516439 system_pods.go:61] "kube-scheduler-newest-cni-20220325021454-262786" [33a2b8ac-d72f-4399-971a-38f587c9994c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0325 02:16:43.937627  516439 system_pods.go:61] "metrics-server-b955d9d8-sbk6n" [80ba7292-f3cd-4e79-88b4-6e9f5d1e738e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:43.937636  516439 system_pods.go:61] "storage-provisioner" [28ecf9b3-cf1c-495e-a39e-8fe37150d662] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:43.937642  516439 system_pods.go:74] duration metric: took 8.196027ms to wait for pod list to return data ...
	I0325 02:16:43.937652  516439 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:16:43.940863  516439 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:16:43.940894  516439 node_conditions.go:123] node cpu capacity is 8
	I0325 02:16:43.940904  516439 node_conditions.go:105] duration metric: took 3.247281ms to run NodePressure ...
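
	The kube-system pod inventory above comes straight from the Kubernetes API. A sketch of the same listing with client-go, assuming the kubeconfig path written to the node in the log and run wherever that file is readable:

	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Kubeconfig path as in the log (assumption for this sketch).
	        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        clientset, err := kubernetes.NewForConfig(config)
	        if err != nil {
	            panic(err)
	        }
	        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	        for _, p := range pods.Items {
	            fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
	        }
	    }
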
	I0325 02:16:43.940927  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:44.087258  516439 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 02:16:44.094685  516439 ops.go:34] apiserver oom_adj: -16
	I0325 02:16:44.094721  516439 kubeadm.go:605] restartCluster took 16.202985802s
	I0325 02:16:44.094732  516439 kubeadm.go:393] StartCluster complete in 16.248550193s
	I0325 02:16:44.094758  516439 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:16:44.094885  516439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:16:44.096265  516439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:16:44.101456  516439 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220325021454-262786" rescaled to 1
	I0325 02:16:44.101529  516439 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:16:44.103443  516439 out.go:176] * Verifying Kubernetes components...
	I0325 02:16:44.103511  516439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:16:44.101558  516439 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0325 02:16:44.103612  516439 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220325021454-262786"
	I0325 02:16:44.103628  516439 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220325021454-262786"
	I0325 02:16:44.103636  516439 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220325021454-262786"
	W0325 02:16:44.103642  516439 addons.go:165] addon storage-provisioner should already be in state true
	I0325 02:16:44.103644  516439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220325021454-262786"
	I0325 02:16:44.103659  516439 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220325021454-262786"
	I0325 02:16:44.103685  516439 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220325021454-262786"
	I0325 02:16:44.103693  516439 host.go:66] Checking if "newest-cni-20220325021454-262786" exists ...
	W0325 02:16:44.103700  516439 addons.go:165] addon metrics-server should already be in state true
	I0325 02:16:44.103616  516439 addons.go:65] Setting dashboard=true in profile "newest-cni-20220325021454-262786"
	I0325 02:16:44.103733  516439 host.go:66] Checking if "newest-cni-20220325021454-262786" exists ...
	I0325 02:16:44.103732  516439 addons.go:153] Setting addon dashboard=true in "newest-cni-20220325021454-262786"
	W0325 02:16:44.103905  516439 addons.go:165] addon dashboard should already be in state true
	I0325 02:16:44.103988  516439 host.go:66] Checking if "newest-cni-20220325021454-262786" exists ...
	I0325 02:16:44.104010  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:44.101542  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 02:16:44.104212  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:44.101745  516439 config.go:176] Loaded profile config "newest-cni-20220325021454-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:16:44.104241  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:44.104495  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:44.121208  516439 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:16:44.121280  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:44.155459  516439 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:16:44.155647  516439 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:16:44.155665  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 02:16:44.155751  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:44.162366  516439 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I0325 02:16:44.163990  516439 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0325 02:16:44.161446  516439 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220325021454-262786"
	W0325 02:16:44.164031  516439 addons.go:165] addon default-storageclass should already be in state true
	I0325 02:16:44.164070  516439 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0325 02:16:44.164081  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0325 02:16:44.164083  516439 host.go:66] Checking if "newest-cni-20220325021454-262786" exists ...
	I0325 02:16:40.468819  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:42.968016  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:44.165737  516439 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0325 02:16:44.164138  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:44.165834  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0325 02:16:44.165852  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0325 02:16:44.164608  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:44.165907  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:44.192351  516439 api_server.go:71] duration metric: took 90.77915ms to wait for apiserver process to appear ...
	I0325 02:16:44.192383  516439 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:16:44.192398  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:44.192396  516439 start.go:757] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0325 02:16:44.198241  516439 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0325 02:16:44.199343  516439 api_server.go:140] control plane version: v1.23.4-rc.0
	I0325 02:16:44.199364  516439 api_server.go:130] duration metric: took 6.9739ms to wait for apiserver health ...
	I0325 02:16:44.199376  516439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:16:44.203708  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:44.209623  516439 system_pods.go:59] 9 kube-system pods found
	I0325 02:16:44.209665  516439 system_pods.go:61] "coredns-64897985d-p65tg" [e65563a2-916d-4e4f-9899-45abcf6e43e6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:44.209676  516439 system_pods.go:61] "etcd-newest-cni-20220325021454-262786" [301b74c1-25bb-412c-8781-5b02da9c4093] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0325 02:16:44.209686  516439 system_pods.go:61] "kindnet-td766" [40872158-4184-4df2-ae83-e42d228b4223] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:16:44.209706  516439 system_pods.go:61] "kube-apiserver-newest-cni-20220325021454-262786" [d2e43879-332a-448a-97c5-1a2bea717597] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0325 02:16:44.209719  516439 system_pods.go:61] "kube-controller-manager-newest-cni-20220325021454-262786" [8af92ea2-d71d-4620-ac1c-594d1cf3cd2b] Running
	I0325 02:16:44.209734  516439 system_pods.go:61] "kube-proxy-fj7dd" [1af095d5-b04f-4be9-bd3b-e2c7a2b373b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0325 02:16:44.209784  516439 system_pods.go:61] "kube-scheduler-newest-cni-20220325021454-262786" [33a2b8ac-d72f-4399-971a-38f587c9994c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0325 02:16:44.209802  516439 system_pods.go:61] "metrics-server-b955d9d8-sbk6n" [80ba7292-f3cd-4e79-88b4-6e9f5d1e738e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:44.209812  516439 system_pods.go:61] "storage-provisioner" [28ecf9b3-cf1c-495e-a39e-8fe37150d662] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:44.209818  516439 system_pods.go:74] duration metric: took 10.436764ms to wait for pod list to return data ...
	I0325 02:16:44.209858  516439 default_sa.go:34] waiting for default service account to be created ...
	I0325 02:16:44.215792  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:44.216231  516439 default_sa.go:45] found service account: "default"
	I0325 02:16:44.216319  516439 default_sa.go:55] duration metric: took 6.410246ms for default service account to be created ...
	I0325 02:16:44.216344  516439 kubeadm.go:548] duration metric: took 114.781757ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0325 02:16:44.216396  516439 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:16:44.219134  516439 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:16:44.219161  516439 node_conditions.go:123] node cpu capacity is 8
	I0325 02:16:44.219175  516439 node_conditions.go:105] duration metric: took 2.773273ms to run NodePressure ...
	I0325 02:16:44.219210  516439 start.go:213] waiting for startup goroutines ...
	I0325 02:16:44.221833  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:44.222359  516439 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 02:16:44.222381  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 02:16:44.222432  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:44.261798  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:44.319771  516439 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0325 02:16:44.319803  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0325 02:16:44.319846  516439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:16:44.321101  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0325 02:16:44.321125  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0325 02:16:44.334351  516439 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0325 02:16:44.334375  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0325 02:16:44.334647  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0325 02:16:44.334666  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0325 02:16:44.349057  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0325 02:16:44.349094  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0325 02:16:44.349070  516439 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:16:44.349161  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0325 02:16:44.389276  516439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:16:44.392743  516439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 02:16:44.393530  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0325 02:16:44.393550  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0325 02:16:44.410521  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0325 02:16:44.410552  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0325 02:16:44.496572  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0325 02:16:44.496606  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0325 02:16:44.515360  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0325 02:16:44.515405  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0325 02:16:44.600692  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0325 02:16:44.600722  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0325 02:16:44.688604  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:16:44.688635  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0325 02:16:44.707599  516439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:16:44.889028  516439 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220325021454-262786"
	I0325 02:16:45.068498  516439 out.go:176] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0325 02:16:45.068530  516439 addons.go:417] enableAddons completed in 966.974309ms
	I0325 02:16:45.105519  516439 start.go:499] kubectl: 1.23.5, cluster: 1.23.4-rc.0 (minor skew: 0)
	I0325 02:16:45.107876  516439 out.go:176] * Done! kubectl is now configured to use "newest-cni-20220325021454-262786" cluster and "default" namespace by default
	I0325 02:16:40.514344  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.516013  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.624849  519649 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:16:40.637160  519649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:16:40.647198  519649 docker.go:183] disabling docker service ...
	I0325 02:16:40.647293  519649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:16:40.657506  519649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:16:40.667205  519649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:16:40.756526  519649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:16:40.838425  519649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 02:16:40.849201  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:16:40.862764  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
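
	The containerd config is shipped as a single base64 string so it survives shell quoting, then decoded with `base64 -d` on the node (the blob above decodes to a config.toml beginning with `version = 2`). The same decode-and-write step in Go, using a shortened stand-in blob and a scratch path, since /etc/containerd would require root:

	    package main

	    import (
	        "encoding/base64"
	        "os"
	    )

	    func main() {
	        // Stand-in for the long blob above; this fragment decodes to
	        // "version = 2\n", the first line of the real config.toml.
	        const encoded = "dmVyc2lvbiA9IDIK"
	        data, err := base64.StdEncoding.DecodeString(encoded)
	        if err != nil {
	            panic(err)
	        }
	        // Scratch path: /etc/containerd/config.toml would need root.
	        if err := os.WriteFile("/tmp/config.toml", data, 0o644); err != nil {
	            panic(err)
	        }
	    }
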
	I0325 02:16:40.877296  519649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:16:40.884604  519649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:16:40.891942  519649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:16:40.968097  519649 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:16:41.042195  519649 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:16:41.042340  519649 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:16:41.046206  519649 start.go:462] Will wait 60s for crictl version
	I0325 02:16:41.046277  519649 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:16:41.069914  519649 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:16:41Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
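
	crictl fails here because containerd was just restarted and its CRI server is not initialized yet, so retry.go schedules another attempt (~11s later; it succeeds at 02:16:52 below). A generic deadline-plus-interval retry in the same spirit, with illustrative durations:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // retryUntil runs fn until it succeeds or timeout passes, logging each
	    // failure the way retry.go does ("will retry after ...").
	    func retryUntil(timeout, interval time.Duration, fn func() error) error {
	        deadline := time.Now().Add(timeout)
	        for {
	            err := fn()
	            if err == nil {
	                return nil
	            }
	            if time.Now().After(deadline) {
	                return fmt.Errorf("timed out: %w", err)
	            }
	            fmt.Printf("will retry after %s: %v\n", interval, err)
	            time.Sleep(interval)
	        }
	    }

	    func main() {
	        // crictl needs a moment after containerd restarts; keep probing.
	        _ = retryUntil(60*time.Second, 11*time.Second, func() error {
	            return exec.Command("sudo", "crictl", "version").Run()
	        })
	    }
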
	I0325 02:16:44.968453  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:47.468552  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:52.117787  519649 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:16:52.144102  519649 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 02:16:52.144170  519649 ssh_runner.go:195] Run: containerd --version
	I0325 02:16:52.168021  519649 ssh_runner.go:195] Run: containerd --version
	I0325 02:16:52.192255  519649 out.go:176] * Preparing Kubernetes v1.23.4-rc.0 on containerd 1.5.10 ...
	I0325 02:16:52.192348  519649 cli_runner.go:133] Run: docker network inspect no-preload-20220325020326-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:16:52.228171  519649 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0325 02:16:52.231817  519649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
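
	The /etc/hosts update above is a filter-then-append: grep -v strips any existing host.minikube.internal record, the fresh 192.168.67.1 mapping is echoed onto the end, and the temp file is copied back with sudo. An equivalent sketch in Go, writing to a scratch path since the real /etc/hosts needs root:

	    package main

	    import (
	        "os"
	        "strings"
	    )

	    // setHostRecord rewrites hostsPath so exactly one line maps name to ip,
	    // mirroring the remove-then-append shell one-liner in the log.
	    func setHostRecord(hostsPath, ip, name string) error {
	        data, err := os.ReadFile(hostsPath)
	        if err != nil {
	            return err
	        }
	        var kept []string
	        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	            // Drop any existing record for name (tab-separated, as grep -v does).
	            if !strings.HasSuffix(line, "\t"+name) {
	                kept = append(kept, line)
	            }
	        }
	        kept = append(kept, ip+"\t"+name)
	        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	    }

	    func main() {
	        // Scratch copy: the real /etc/hosts would need root (assumption).
	        if err := setHostRecord("/tmp/hosts", "192.168.67.1", "host.minikube.internal"); err != nil {
	            panic(err)
	        }
	    }
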
	I0325 02:16:49.968284  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:52.467868  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:54.468272  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:52.244329  519649 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 02:16:52.244416  519649 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime containerd
	I0325 02:16:52.244468  519649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:16:52.271321  519649 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:16:52.271344  519649 cache_images.go:84] Images are preloaded, skipping loading
	I0325 02:16:52.271385  519649 ssh_runner.go:195] Run: sudo crictl info
	I0325 02:16:52.298329  519649 cni.go:93] Creating CNI manager for ""
	I0325 02:16:52.298360  519649 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:16:52.298373  519649 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 02:16:52.298389  519649 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.4-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220325020326-262786 NodeName:no-preload-20220325020326-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 02:16:52.298577  519649 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220325020326-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.4-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0325 02:16:52.298682  519649 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.4-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220325020326-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0325 02:16:52.298747  519649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4-rc.0
	I0325 02:16:52.306846  519649 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 02:16:52.306918  519649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 02:16:52.315084  519649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (582 bytes)
	I0325 02:16:52.328704  519649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0325 02:16:52.342299  519649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2066 bytes)
	I0325 02:16:52.355577  519649 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0325 02:16:52.358463  519649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:16:52.367826  519649 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786 for IP: 192.168.67.2
	I0325 02:16:52.367934  519649 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 02:16:52.367989  519649 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 02:16:52.368051  519649 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/client.key
	I0325 02:16:52.368101  519649 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.key.c7fa3a9e
	I0325 02:16:52.368132  519649 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.key
	I0325 02:16:52.368232  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 02:16:52.368263  519649 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 02:16:52.368275  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 02:16:52.368299  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 02:16:52.368335  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 02:16:52.368357  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 02:16:52.368397  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:16:52.368977  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 02:16:52.386350  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0325 02:16:52.404078  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 02:16:52.422535  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0325 02:16:52.441293  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 02:16:52.458689  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 02:16:52.476708  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 02:16:52.494410  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 02:16:52.511769  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 02:16:52.529287  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 02:16:52.546092  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 02:16:52.562842  519649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 02:16:52.574641  519649 ssh_runner.go:195] Run: openssl version
	I0325 02:16:52.579369  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 02:16:52.586915  519649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 02:16:52.590088  519649 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 02:16:52.590144  519649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 02:16:52.595082  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
	I0325 02:16:52.601804  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 02:16:52.608863  519649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 02:16:52.611860  519649 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 02:16:52.611906  519649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 02:16:52.616573  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
	I0325 02:16:52.622899  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 02:16:52.629919  519649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:16:52.632815  519649 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:16:52.632859  519649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:16:52.637417  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
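
	The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: `openssl x509 -hash -noout` prints the hash under which OpenSSL looks a CA up in /etc/ssl/certs. A sketch of one hash-and-link step, assuming the paths from the log and root privileges for the symlink:

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "path/filepath"
	        "strings"
	    )

	    // linkCert installs certPath into dir under OpenSSL's <subject-hash>.0
	    // naming scheme, as the openssl/ln pairs in the log do.
	    func linkCert(certPath, dir string) error {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	        if err != nil {
	            return err
	        }
	        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	        link := filepath.Join(dir, hash+".0")
	        _ = os.Remove(link) // replace a stale link, like ln -fs
	        return os.Symlink(certPath, link)
	    }

	    func main() {
	        // Paths from the log; writing /etc/ssl/certs needs root (assumption).
	        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	        }
	    }
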
	I0325 02:16:52.644239  519649 kubeadm.go:391] StartCluster: {Name:no-preload-20220325020326-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:16:52.644354  519649 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 02:16:52.644394  519649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:16:52.669210  519649 cri.go:87] found id: "e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741"
	I0325 02:16:52.669242  519649 cri.go:87] found id: "0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc"
	I0325 02:16:52.669249  519649 cri.go:87] found id: "ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252"
	I0325 02:16:52.669254  519649 cri.go:87] found id: "fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b"
	I0325 02:16:52.669270  519649 cri.go:87] found id: "e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710"
	I0325 02:16:52.669279  519649 cri.go:87] found id: "b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244"
	I0325 02:16:52.669283  519649 cri.go:87] found id: ""
	I0325 02:16:52.669324  519649 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0325 02:16:52.683722  519649 cri.go:114] JSON = null
	W0325 02:16:52.683785  519649 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
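
	The warning arises because `runc list -f json` prints the literal `null` when it sees no containers under that root, while crictl still reports 6; minikube logs the mismatch and carries on with the restart. A sketch of parsing that output, where a nil slice is exactly the "JSON = null" case:

	    package main

	    import (
	        "encoding/json"
	        "fmt"
	        "os/exec"
	    )

	    // runc prints a JSON array of containers, or the literal "null" when
	    // the root holds none -- the "JSON = null" seen in the log.
	    type runcContainer struct {
	        ID     string `json:"id"`
	        Status string `json:"status"`
	    }

	    func main() {
	        out, err := exec.Command("sudo", "runc",
	            "--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	        if err != nil {
	            fmt.Println("runc list failed:", err)
	            return
	        }
	        var containers []runcContainer // stays nil when output is "null"
	        if err := json.Unmarshal(out, &containers); err != nil {
	            fmt.Println("bad JSON:", err)
	            return
	        }
	        fmt.Printf("runc sees %d containers\n", len(containers))
	    }
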
	I0325 02:16:52.683838  519649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 02:16:52.690850  519649 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0325 02:16:52.690872  519649 kubeadm.go:601] restartCluster start
	I0325 02:16:52.690912  519649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0325 02:16:52.697516  519649 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:52.698228  519649 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220325020326-262786" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:16:52.698600  519649 kubeconfig.go:127] "no-preload-20220325020326-262786" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0325 02:16:52.699273  519649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
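Kubeconfig verification failed here because the cluster's context was absent from the Jenkins kubeconfig, so the file is rewritten under a lock. A minimal sketch of the detect-and-repair step using client-go's clientcmd package (the repair below only installs a placeholder context; minikube's kubeconfig package fills in real cluster and user data):

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        path := os.Getenv("KUBECONFIG")
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            fmt.Println("load:", err)
            return
        }
        const name = "no-preload-20220325020326-262786"
        if _, ok := cfg.Contexts[name]; !ok {
            fmt.Printf("%q context is missing - will repair!\n", name)
            // Placeholder entry; the real repair also writes cluster
            // endpoint and credentials.
            cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
            if err := clientcmd.WriteToFile(*cfg, path); err != nil {
                fmt.Println("write:", err)
            }
        }
    }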
	I0325 02:16:52.700696  519649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0325 02:16:52.707667  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:52.707717  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:52.715666  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:52.916102  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:52.916184  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:52.925481  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.116769  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.116855  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.125381  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.316671  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.316772  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.325189  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.516483  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.516581  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.525793  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.716104  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.716183  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.724648  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.915849  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.915940  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.924616  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.115776  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.115861  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.124538  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.316714  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.316801  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.325601  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.515836  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.515913  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.524158  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.716463  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.716549  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.725607  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.915823  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.915903  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.924487  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.116802  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.116901  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.126160  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.316446  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.316526  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.324891  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:56.468419  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:58.968213  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:55.516554  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.516656  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.525265  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.716429  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.716509  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.725617  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.725645  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.725683  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.733139  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.733164  519649 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
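The burst of pgrep probes above is a poll-until-deadline loop: each iteration asks the node for a kube-apiserver PID, and once the deadline lapses with no PID, restartCluster concludes the apiserver is down and a reconfigure is needed. A rough sketch of that shape (the 3s budget and 200ms cadence are read off the timestamps, not taken from minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(3 * time.Second)
        for {
            // Same probe as the log: a PID means the apiserver process exists.
            out, err := exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Output()
            if err == nil && len(out) > 0 {
                fmt.Printf("apiserver pid: %s", out)
                return
            }
            if time.Now().After(deadline) {
                fmt.Println("needs reconfigure: timed out waiting for the condition")
                return
            }
            time.Sleep(200 * time.Millisecond) // matches the ~200ms cadence above
        }
    }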
	I0325 02:16:55.733174  519649 kubeadm.go:1067] stopping kube-system containers ...
	I0325 02:16:55.733193  519649 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0325 02:16:55.733247  519649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:16:55.758794  519649 cri.go:87] found id: "e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741"
	I0325 02:16:55.758826  519649 cri.go:87] found id: "0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc"
	I0325 02:16:55.758835  519649 cri.go:87] found id: "ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252"
	I0325 02:16:55.758843  519649 cri.go:87] found id: "fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b"
	I0325 02:16:55.758852  519649 cri.go:87] found id: "e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710"
	I0325 02:16:55.758860  519649 cri.go:87] found id: "b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244"
	I0325 02:16:55.758867  519649 cri.go:87] found id: ""
	I0325 02:16:55.758874  519649 cri.go:232] Stopping containers: [e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741 0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252 fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710 b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244]
	I0325 02:16:55.758928  519649 ssh_runner.go:195] Run: which crictl
	I0325 02:16:55.762024  519649 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741 0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252 fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710 b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244
	I0325 02:16:55.786603  519649 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0325 02:16:55.796385  519649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:16:55.803085  519649 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar 25 02:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Mar 25 02:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Mar 25 02:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Mar 25 02:03 /etc/kubernetes/scheduler.conf
	
	I0325 02:16:55.803151  519649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0325 02:16:55.809939  519649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0325 02:16:55.816507  519649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0325 02:16:55.822744  519649 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.822807  519649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0325 02:16:55.828985  519649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0325 02:16:55.835918  519649 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.835967  519649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
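Each surviving config file is grepped for the expected control-plane endpoint; files that no longer mention https://control-plane.minikube.internal:8443 are deleted so the next kubeadm phase can regenerate them. A local-filesystem sketch of the same decision (minikube runs it over SSH instead):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil {
                continue // missing file: kubeadm will create it
            }
            if !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                // Stale endpoint; let "kubeadm init phase kubeconfig" rewrite it.
                os.Remove(f)
            }
        }
    }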
	I0325 02:16:55.843105  519649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:16:55.850384  519649 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0325 02:16:55.850419  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:55.893825  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:56.667540  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:56.802771  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:56.854899  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
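Instead of a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the saved kubeadm.yaml. A simplified sketch of driving those phases in order (paths mirror the log; the sudo/env PATH wrapping and error recovery are omitted):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        const (
            binDir = "/var/lib/minikube/binaries/v1.23.4-rc.0"
            cfg    = "/var/tmp/minikube/kubeadm.yaml"
        )
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, ph := range phases {
            args := append([]string{"init", "phase"}, ph...)
            args = append(args, "--config", cfg)
            if out, err := exec.Command(binDir+"/kubeadm", args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", ph, err, out)
                return
            }
        }
        fmt.Println("control plane reconfigured from", cfg)
    }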
	I0325 02:16:56.922247  519649 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:16:56.922327  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:57.431777  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:57.932218  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:58.431927  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:58.931629  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:59.432174  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:59.932237  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:00.431697  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:00.968915  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:03.468075  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:00.932213  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:01.431617  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:01.931744  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:02.431861  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:02.931562  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:02.996665  519649 api_server.go:71] duration metric: took 6.074430006s to wait for apiserver process to appear ...
	I0325 02:17:02.996706  519649 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:17:02.996721  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:02.997178  519649 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0325 02:17:03.497954  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:06.096426  519649 api_server.go:266] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0325 02:17:06.096466  519649 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0325 02:17:06.497872  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:06.502718  519649 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:17:06.502746  519649 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:17:06.998348  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:07.002908  519649 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:17:07.002934  519649 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:17:07.497481  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:07.502551  519649 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0325 02:17:07.508747  519649 api_server.go:140] control plane version: v1.23.4-rc.0
	I0325 02:17:07.508776  519649 api_server.go:130] duration metric: took 4.512062997s to wait for apiserver health ...
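The healthz loop above tolerates connection refusals while the apiserver boots, 403s from the anonymous probe before RBAC bootstrap completes, and 500s while post-start hooks are still failing; only a 200 "ok" ends the wait. A self-contained sketch of a single probe, skipping TLS verification the way a bootstrap-time check has to:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // The apiserver cert is not trusted by the host at this point.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.67.2:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err) // e.g. connection refused while booting
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode == http.StatusOK {
            fmt.Println("healthz:", string(body)) // "ok"
            return
        }
        // 403/500 are retried by the caller until a deadline.
        fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    }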
	I0325 02:17:07.508793  519649 cni.go:93] Creating CNI manager for ""
	I0325 02:17:07.508800  519649 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:17:05.468506  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:07.968498  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:07.511699  519649 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:17:07.511795  519649 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:17:07.515865  519649 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl ...
	I0325 02:17:07.515896  519649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:17:07.530511  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
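The CNI manifest (kindnet, 2429 bytes) is copied to /var/tmp/minikube/cni.yaml and applied with the version-matched kubectl against the in-VM kubeconfig. Roughly, minus the SSH layer:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.23.4-rc.0/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Printf("apply failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("%s", out)
    }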
	I0325 02:17:08.432775  519649 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:17:08.439909  519649 system_pods.go:59] 9 kube-system pods found
	I0325 02:17:08.439946  519649 system_pods.go:61] "coredns-64897985d-b9827" [29b80e2f-89fe-4b4a-a931-333a59535d4c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:17:08.439962  519649 system_pods.go:61] "etcd-no-preload-20220325020326-262786" [add71311-f324-4612-b981-ca42b0ef813c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0325 02:17:08.439971  519649 system_pods.go:61] "kindnet-nhlsm" [57939cf7-016c-486a-8a08-466ff1515c1f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:17:08.439977  519649 system_pods.go:61] "kube-apiserver-no-preload-20220325020326-262786" [f9b1f749-8d63-446e-bd36-152e849a5bf6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0325 02:17:08.439990  519649 system_pods.go:61] "kube-controller-manager-no-preload-20220325020326-262786" [a229a2c1-6ed0-434a-8b3c-7951beee3fe0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0325 02:17:08.439994  519649 system_pods.go:61] "kube-proxy-l6tg2" [f41c6b8d-0d57-4096-af80-8e9a7da29b60] Running
	I0325 02:17:08.440003  519649 system_pods.go:61] "kube-scheduler-no-preload-20220325020326-262786" [a41de5aa-8f3c-46cd-bc8e-85c035c31512] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0325 02:17:08.440012  519649 system_pods.go:61] "metrics-server-b955d9d8-dzczk" [5c06ad70-f575-44ee-8a14-d4d2b172ccf2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:17:08.440019  519649 system_pods.go:61] "storage-provisioner" [d778a38b-7ebf-4a50-956a-6628a9055852] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:17:08.440027  519649 system_pods.go:74] duration metric: took 7.223437ms to wait for pod list to return data ...
	I0325 02:17:08.440037  519649 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:17:08.443080  519649 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:17:08.443104  519649 node_conditions.go:123] node cpu capacity is 8
	I0325 02:17:08.443116  519649 node_conditions.go:105] duration metric: took 3.071905ms to run NodePressure ...
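The NodePressure check reads each node's capacity through the API; the figures logged (304695084Ki ephemeral storage, 8 CPUs) come straight from node status. A sketch of pulling the same numbers with client-go (the kubeconfig path is an assumption):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // The same figures the log reports: ephemeral storage and CPU capacity.
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
                n.Status.Capacity.StorageEphemeral().String(),
                n.Status.Capacity.Cpu().String())
        }
    }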
	I0325 02:17:08.443134  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:17:08.590505  519649 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0325 02:17:08.611319  519649 kubeadm.go:752] kubelet initialised
	I0325 02:17:08.611346  519649 kubeadm.go:753] duration metric: took 20.794737ms waiting for restarted kubelet to initialise ...
	I0325 02:17:08.611354  519649 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:17:08.617229  519649 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-b9827" in "kube-system" namespace to be "Ready" ...
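From here both runners settle into bounded readiness polling: one waits on the node Ready condition, the other on each system-critical pod's Ready condition, each with a 4m0s budget. A sketch of one such wait using apimachinery's wait helper (the 2s interval is an assumption; the pod name is the one from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
            p, err := cs.CoreV1().Pods("kube-system").Get(
                context.TODO(), "coredns-64897985d-b9827", metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API error: keep polling
            }
            return podReady(p), nil
        })
        fmt.Println("wait result:", err) // nil once Ready; timeout error otherwise
    }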
	I0325 02:17:09.968693  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:12.468173  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:10.623188  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:13.123899  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:14.968191  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:17.468172  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:15.623504  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:18.123637  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:19.968292  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:22.468166  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:20.623486  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:22.624740  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:24.625363  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:24.968021  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:27.468041  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:29.468565  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:27.123366  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:29.123949  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:31.968178  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:33.968823  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:31.623836  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:34.123164  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:36.468695  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:38.967993  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:36.123971  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:38.623418  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:41.468821  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:43.968154  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:40.623650  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:43.124505  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:45.968404  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:47.968532  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:45.624087  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:48.123363  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:50.468244  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:52.468797  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:54.468960  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:50.623592  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:52.624829  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:55.124055  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:56.968701  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:59.467918  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:57.623248  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:59.623684  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:01.468256  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:03.967939  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:01.623899  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:04.123560  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:05.968665  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:08.467884  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:06.124019  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:08.623070  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:10.468279  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:12.468416  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:11.123374  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:13.623289  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:14.967919  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:16.968150  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:19.468065  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:15.623672  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:18.124412  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:21.468475  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:23.968850  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:20.624197  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:23.123807  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:25.124272  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:26.468766  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:28.968612  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:27.624274  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:30.123559  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:30.968779  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:33.468295  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:32.623099  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:34.623275  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:35.468741  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:37.968661  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:36.623368  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:38.623990  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:40.468313  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:42.468818  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:40.624162  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:43.123758  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:44.968325  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:47.468369  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:45.623667  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:47.623731  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:50.123654  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:49.968304  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:51.968856  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:54.468654  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:52.623485  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:54.623818  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:56.968573  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:58.968977  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:57.123496  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:59.124157  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:01.470174  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:03.968282  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:01.623917  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:04.123410  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:05.968412  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:07.968843  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:06.124235  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:08.124325  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:10.467818  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:12.468731  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:10.623795  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:13.123199  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:15.124279  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:14.967929  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:16.968185  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:18.968894  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:17.623867  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:20.124329  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:21.468097  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:23.468504  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:22.622920  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:24.623325  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:25.968086  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:28.467817  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:26.623622  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:29.123797  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:30.467966  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:32.967797  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:34.470135  496534 node_ready.go:38] duration metric: took 4m0.008592307s waiting for node "old-k8s-version-20220325015306-262786" to be "Ready" ...
	I0325 02:19:34.472535  496534 out.go:176] 
	W0325 02:19:34.472693  496534 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:19:34.472714  496534 out.go:241] * 
	W0325 02:19:34.473654  496534 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
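Process 496534 (the old-k8s-version FirstStart run) has now spent its full 4m0s node-Ready wait, inside the overall 6m0s start budget, without the kubelet ever reporting Ready, so minikube aborts with GUEST_START and prints the issue-reporting box above. A hypothetical triage sequence for this state, assuming the failed profile were still running (none of these commands or their output come from this report), would start from the log bundle the box asks for and then read the node's conditions directly:

	# Collect the full minikube log bundle the error box requests.
	minikube -p old-k8s-version-20220325015306-262786 logs --file=logs.txt
	# Inspect the node object; the Ready condition's Reason/Message usually
	# names the blocker (CNI not initialized, runtime not ready, etc.).
	kubectl get nodes -o wide
	kubectl describe node old-k8s-version-20220325015306-262786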
	I0325 02:19:31.124139  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:33.623203  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[pod_ready.go:102 repeated the identical Pending status for pod "coredns-64897985d-b9827" roughly every 2.5s from 02:19:35 through 02:21:05; 41 identical entries elided]
	I0325 02:21:08.123740  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:21:08.621002  519649 pod_ready.go:81] duration metric: took 4m0.003733568s waiting for pod "coredns-64897985d-b9827" in "kube-system" namespace to be "Ready" ...
	E0325 02:21:08.621038  519649 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-b9827" in "kube-system" namespace to be "Ready" (will not retry!)
	I0325 02:21:08.621065  519649 pod_ready.go:38] duration metric: took 4m0.009701445s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:21:08.621095  519649 kubeadm.go:605] restartCluster took 4m15.930218796s
	W0325 02:21:08.621264  519649 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
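Process 519649 hit the same class of failure from the pod side: coredns-64897985d-b9827 stayed Pending for the full 4m0s because the scheduler would not place it on a node still carrying the node.kubernetes.io/not-ready taint, and the kubelet only strips that taint once it reports Ready. Having given up on waiting, minikube falls back to resetting and re-initializing the cluster. A minimal sketch of verifying that taint-to-Pending chain by hand (illustrative commands, not output from this run):

	# List each node's taints; a healthy node carries no
	# node.kubernetes.io/not-ready entry.
	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
	# The pod's PodScheduled=False condition repeats the same
	# "had taint ... that the pod didn't tolerate" message logged above.
	kubectl -n kube-system get pod coredns-64897985d-b9827 -o jsonpath='{.status.conditions}'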
	I0325 02:21:08.621308  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0325 02:21:10.388277  519649 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.766939087s)
	I0325 02:21:10.388356  519649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:21:10.397928  519649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:21:10.405143  519649 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 02:21:10.405196  519649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:21:10.412369  519649 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0325 02:21:10.412423  519649 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0325 02:21:23.839012  519649 out.go:203]   - Generating certificates and keys ...
	I0325 02:21:23.841551  519649 out.go:203]   - Booting up control plane ...
	I0325 02:21:23.844819  519649 out.go:203]   - Configuring RBAC rules ...
	I0325 02:21:23.846446  519649 cni.go:93] Creating CNI manager for ""
	I0325 02:21:23.846463  519649 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:21:23.848159  519649 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:21:23.848260  519649 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:21:23.851792  519649 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl ...
	I0325 02:21:23.851811  519649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:21:23.864694  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
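Because this profile combines the docker driver with the containerd runtime, minikube picks kindnet as the CNI (cni.go:160 above) and applies its manifest with the bundled kubectl. If pods later sit in ContainerCreating, one quick hypothetical check (profile name taken from the node labels set below; not output from this run) is that the CNI config and plugins actually landed inside the node:

	# Confirm the kindnet config and the standard CNI plugin binaries
	# (including the portmap binary stat'ed above) exist on the node.
	minikube -p no-preload-20220325020326-262786 ssh -- ls /etc/cni/net.d /opt/cni/bin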
	I0325 02:21:24.545001  519649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 02:21:24.545086  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:24.545087  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=no-preload-20220325020326-262786 minikube.k8s.io/updated_at=2022_03_25T02_21_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:24.552352  519649 ops.go:34] apiserver oom_adj: -16
	I0325 02:21:24.617795  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[the same "kubectl get sa default" probe re-ran every ~0.5s from 02:21:25 through 02:21:37; 25 identical entries elided]
	I0325 02:21:37.675061  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:37.741138  519649 kubeadm.go:1020] duration metric: took 13.196118254s to wait for elevateKubeSystemPrivileges.
	I0325 02:21:37.741171  519649 kubeadm.go:393] StartCluster complete in 4m45.096948299s
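The burst of `kubectl get sa default` probes above is minikube's elevateKubeSystemPrivileges wait: the "default" ServiceAccount only appears once kube-controller-manager's serviceaccount controller is running, so its existence is a cheap proxy for a functional control plane before the cluster-admin binding created above takes effect. A rough shell equivalent of that gate (an illustration of the idea, not minikube's actual code, which drives these probes over SSH from Go):

	# Poll until the default ServiceAccount exists, i.e. until the
	# controller manager has come up far enough to populate namespaces.
	until kubectl get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done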
	I0325 02:21:37.741190  519649 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:21:37.741314  519649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:21:37.742545  519649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:21:38.259722  519649 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220325020326-262786" rescaled to 1
	I0325 02:21:38.259791  519649 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:21:38.261749  519649 out.go:176] * Verifying Kubernetes components...
	I0325 02:21:38.259824  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 02:21:38.261828  519649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:21:38.259842  519649 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0325 02:21:38.261923  519649 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220325020326-262786"
	I0325 02:21:38.261953  519649 addons.go:65] Setting metrics-server=true in profile "no-preload-20220325020326-262786"
	I0325 02:21:38.261962  519649 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220325020326-262786"
	I0325 02:21:38.261965  519649 addons.go:153] Setting addon metrics-server=true in "no-preload-20220325020326-262786"
	I0325 02:21:38.261933  519649 addons.go:65] Setting dashboard=true in profile "no-preload-20220325020326-262786"
	W0325 02:21:38.261977  519649 addons.go:165] addon metrics-server should already be in state true
	I0325 02:21:38.262018  519649 host.go:66] Checking if "no-preload-20220325020326-262786" exists ...
	W0325 02:21:38.261970  519649 addons.go:165] addon storage-provisioner should already be in state true
	I0325 02:21:38.262134  519649 host.go:66] Checking if "no-preload-20220325020326-262786" exists ...
	I0325 02:21:38.261981  519649 addons.go:153] Setting addon dashboard=true in "no-preload-20220325020326-262786"
	W0325 02:21:38.262196  519649 addons.go:165] addon dashboard should already be in state true
	I0325 02:21:38.262244  519649 host.go:66] Checking if "no-preload-20220325020326-262786" exists ...
	I0325 02:21:38.261943  519649 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220325020326-262786"
	I0325 02:21:38.262309  519649 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220325020326-262786"
	I0325 02:21:38.260052  519649 config.go:176] Loaded profile config "no-preload-20220325020326-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:21:38.262573  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:21:38.262579  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:21:38.262610  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:21:38.262698  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:21:38.272707  519649 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220325020326-262786" to be "Ready" ...
	I0325 02:21:38.320596  519649 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:21:38.320821  519649 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:21:38.320836  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 02:21:38.320907  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:21:38.321013  519649 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220325020326-262786"
	W0325 02:21:38.321039  519649 addons.go:165] addon default-storageclass should already be in state true
	I0325 02:21:38.321070  519649 host.go:66] Checking if "no-preload-20220325020326-262786" exists ...
	I0325 02:21:38.321575  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:21:38.324184  519649 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0325 02:21:38.324252  519649 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0325 02:21:38.324270  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0325 02:21:38.324324  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:21:38.336145  519649 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I0325 02:21:38.337877  519649 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0325 02:21:38.337968  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0325 02:21:38.337980  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0325 02:21:38.338045  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:21:38.376075  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:21:38.378999  519649 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 02:21:38.379027  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 02:21:38.379082  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:21:38.384592  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:21:38.391085  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:21:38.406139  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0325 02:21:38.430033  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:21:38.505660  519649 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0325 02:21:38.505695  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0325 02:21:38.510841  519649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:21:38.602641  519649 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0325 02:21:38.602672  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0325 02:21:38.694575  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0325 02:21:38.694613  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0325 02:21:38.696025  519649 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:21:38.696050  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0325 02:21:38.705044  519649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 02:21:38.789746  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0325 02:21:38.789782  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0325 02:21:38.791823  519649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:21:38.813086  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0325 02:21:38.813128  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0325 02:21:38.895062  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0325 02:21:38.895094  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0325 02:21:38.912219  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0325 02:21:38.912252  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0325 02:21:39.000012  519649 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
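	
	(Not part of the captured log: the kubectl replace a few lines above patches the coredns ConfigMap so host.minikube.internal resolves to the host at 192.168.67.1. A minimal sketch for verifying what landed, assuming the context name created by this run:
	
	  $ kubectl --context no-preload-20220325020326-262786 -n kube-system \
	      get configmap coredns -o jsonpath='{.data.Corefile}'
	  # should now contain, ahead of the forward plugin:
	  #   hosts {
	  #      192.168.67.1 host.minikube.internal
	  #      fallthrough
	  #   }
	)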
	I0325 02:21:39.085188  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0325 02:21:39.085284  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0325 02:21:39.190895  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0325 02:21:39.190929  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0325 02:21:39.210367  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0325 02:21:39.210397  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0325 02:21:39.285312  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:21:39.285346  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0325 02:21:39.306663  519649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:21:39.525639  519649 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220325020326-262786"
	I0325 02:21:40.286516  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:21:40.404818  519649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.098109992s)
	I0325 02:21:40.407835  519649 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0325 02:21:40.407870  519649 addons.go:417] enableAddons completed in 2.14803176s
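	
	(Not part of the captured log: the four addons above report enabled in ~2.1s, but that only confirms the manifests applied. A sketch for cross-checking the result afterwards, reusing the profile/context name from this run — the dashboard pods land in the kubernetes-dashboard namespace, metrics-server in kube-system:
	
	  $ minikube addons list -p no-preload-20220325020326-262786
	  $ kubectl --context no-preload-20220325020326-262786 get pods -A
	)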
	I0325 02:21:42.779727  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:21:44.779767  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:21:47.280211  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:21:49.779262  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:21:51.779687  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:21:54.279848  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:21:56.280050  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:21:58.779731  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:00.780260  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:03.279281  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:05.279729  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:07.279906  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:09.780010  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:12.280241  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:14.779921  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:17.279893  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:19.779940  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:22.280412  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:24.779387  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:26.779919  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:29.279534  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:31.280132  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:33.779899  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:36.280242  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:38.780135  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:41.280030  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:43.780084  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:46.279339  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:48.279930  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:50.779251  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:52.780056  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:54.780774  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
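	
	(Not part of the captured log: node_ready.go polls like this for up to 6m0s and never sees Ready. The same gate can be expressed directly with kubectl against the context this run created — a sketch:
	
	  $ kubectl --context no-preload-20220325020326-262786 wait \
	      node/no-preload-20220325020326-262786 --for=condition=Ready --timeout=6m
	)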
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED                  STATE               NAME                      ATTEMPT             POD ID
	f5e3884eab777       6de166512aa22       Less than a second ago   Running             kindnet-cni               4                   f2bb5089f8868
	246eba7f6d94c       6de166512aa22       3 minutes ago            Exited              kindnet-cni               3                   f2bb5089f8868
	dd3e42aaf3dd8       9b7cc99821098       12 minutes ago           Running             kube-proxy                0                   b8a442f1cca90
	21482958b68c2       b07520cd7ab76       12 minutes ago           Running             kube-controller-manager   0                   f48ebb07b3e52
	bc6cf9877becc       25f8c7f3da61c       12 minutes ago           Running             etcd                      0                   083318a0382f5
	6a469f6f4de50       f40be0088a83e       12 minutes ago           Running             kube-apiserver            0                   79ca704e9271f
	c154a93ac7de2       99a3486be4f28       12 minutes ago           Running             kube-scheduler            0                   259e2071a573d
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-03-25 02:10:08 UTC, end at Fri 2022-03-25 02:22:56 UTC. --
	Mar 25 02:16:14 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:16:14.042551242Z" level=info msg="RemoveContainer for \"d208a110372dd3afe93f06ac2658cfd92f99ac83bbb21db8d077402fd5871907\" returns successfully"
	Mar 25 02:16:26 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:16:26.413691505Z" level=info msg="CreateContainer within sandbox \"f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Mar 25 02:16:26 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:16:26.428015830Z" level=info msg="CreateContainer within sandbox \"f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"030de033d0939ded8e5344d90e7d56927ace37474be7e3f274dda51d9fa71a50\""
	Mar 25 02:16:26 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:16:26.428632011Z" level=info msg="StartContainer for \"030de033d0939ded8e5344d90e7d56927ace37474be7e3f274dda51d9fa71a50\""
	Mar 25 02:16:26 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:16:26.600107317Z" level=info msg="StartContainer for \"030de033d0939ded8e5344d90e7d56927ace37474be7e3f274dda51d9fa71a50\" returns successfully"
	Mar 25 02:19:06 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:06.829730221Z" level=info msg="shim disconnected" id=030de033d0939ded8e5344d90e7d56927ace37474be7e3f274dda51d9fa71a50
	Mar 25 02:19:06 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:06.829803025Z" level=warning msg="cleaning up after shim disconnected" id=030de033d0939ded8e5344d90e7d56927ace37474be7e3f274dda51d9fa71a50 namespace=k8s.io
	Mar 25 02:19:06 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:06.829819468Z" level=info msg="cleaning up dead shim"
	Mar 25 02:19:06 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:06.840242030Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:19:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2545\n"
	Mar 25 02:19:07 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:07.333912812Z" level=info msg="RemoveContainer for \"22b6127ab7f71c86e4615a4dc3e722fd358e082ef1371efb6d3f116104e10ef6\""
	Mar 25 02:19:07 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:07.338288741Z" level=info msg="RemoveContainer for \"22b6127ab7f71c86e4615a4dc3e722fd358e082ef1371efb6d3f116104e10ef6\" returns successfully"
	Mar 25 02:19:34 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:34.413990532Z" level=info msg="CreateContainer within sandbox \"f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Mar 25 02:19:34 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:34.427091737Z" level=info msg="CreateContainer within sandbox \"f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739\""
	Mar 25 02:19:34 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:34.427719553Z" level=info msg="StartContainer for \"246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739\""
	Mar 25 02:19:34 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:34.594883412Z" level=info msg="StartContainer for \"246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739\" returns successfully"
	Mar 25 02:22:14 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:14.829577310Z" level=info msg="shim disconnected" id=246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739
	Mar 25 02:22:14 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:14.829643054Z" level=warning msg="cleaning up after shim disconnected" id=246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739 namespace=k8s.io
	Mar 25 02:22:14 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:14.829657322Z" level=info msg="cleaning up dead shim"
	Mar 25 02:22:14 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:14.839797570Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:22:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2647\n"
	Mar 25 02:22:15 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:15.654850281Z" level=info msg="RemoveContainer for \"030de033d0939ded8e5344d90e7d56927ace37474be7e3f274dda51d9fa71a50\""
	Mar 25 02:22:15 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:15.659701357Z" level=info msg="RemoveContainer for \"030de033d0939ded8e5344d90e7d56927ace37474be7e3f274dda51d9fa71a50\" returns successfully"
	Mar 25 02:22:55 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:55.414143541Z" level=info msg="CreateContainer within sandbox \"f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Mar 25 02:22:55 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:55.427009616Z" level=info msg="CreateContainer within sandbox \"f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db\""
	Mar 25 02:22:55 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:55.427567427Z" level=info msg="StartContainer for \"f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db\""
	Mar 25 02:22:55 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:55.588285208Z" level=info msg="StartContainer for \"f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db\" returns successfully"
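	
	(Not part of the captured log: the containerd entries above show kindnet-cni cycling through attempts 2–4 inside the same sandbox. A sketch for pulling the exited attempt's output from inside the node, using the container ID from the status table above:
	
	  $ minikube -p default-k8s-different-port-20220325020956-262786 ssh -- \
	      sudo crictl ps -a --name kindnet-cni
	  $ minikube -p default-k8s-different-port-20220325020956-262786 ssh -- \
	      sudo crictl logs 246eba7f6d94c
	)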
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220325020956-262786
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220325020956-262786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95
	                    minikube.k8s.io/name=default-k8s-different-port-20220325020956-262786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_25T02_10_39_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Mar 2022 02:10:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220325020956-262786
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 25 Mar 2022 02:22:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Mar 2022 02:21:04 +0000   Fri, 25 Mar 2022 02:10:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Mar 2022 02:21:04 +0000   Fri, 25 Mar 2022 02:10:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Mar 2022 02:21:04 +0000   Fri, 25 Mar 2022 02:10:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 25 Mar 2022 02:21:04 +0000   Fri, 25 Mar 2022 02:10:32 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20220325020956-262786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                3d34c106-4e48-46f4-9bcf-ea4602321294
	  Boot ID:                    63fce5d9-a30b-498a-bfed-7dd46d23a363
	  Kernel Version:             5.13.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.3
	  Kube-Proxy Version:         v1.23.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220325020956-262786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-kt955                                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220325020956-262786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220325020956-262786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7cpjt                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220325020956-262786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 12m                kube-proxy  
	  Normal  NodeHasSufficientMemory  12m (x4 over 12m)  kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x4 over 12m)  kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x4 over 12m)  kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
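	
	(Not part of the captured log: the describe output shows the root symptom — Ready=False with reason KubeletNotReady, "cni plugin not initialized", which in turn keeps the node.kubernetes.io/not-ready:NoSchedule taint in place. A sketch for watching just that condition and taint:
	
	  $ kubectl --context default-k8s-different-port-20220325020956-262786 get node \
	      default-k8s-different-port-20220325020956-262786 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")]}{"\n"}{.spec.taints}{"\n"}'
	)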
	
	* 
	* ==> dmesg <==
	* [  +1.011896] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.023877] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +0.953086] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethf4b51852
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 70 36 f8 5f b0 08 06
	[  +0.031950] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth6535462d
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a c6 0e 0e 23 49 08 06
	[  +0.644934] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth6535462d
	[  +0.401878] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth5b52bbbf
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 37 c2 ed 50 67 08 06
	[  +0.935995] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.035860] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.019942] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[Mar25 02:14] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth3401b1e0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 76 e9 ba cf fb f8 08 06
	[  +0.179199] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vetha9eb2fdf
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 32 f8 c1 5c 31 f0 08 06
	[  +0.564272] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethc1de7e82
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e d1 71 83 67 99 08 06
	[  +0.295714] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth71c4bd69
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe 35 ee 14 12 82 08 06
	
	* 
	* ==> etcd [bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7] <==
	* {"level":"info","ts":"2022-03-25T02:10:33.405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-03-25T02:10:33.405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-03-25T02:10:33.405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:default-k8s-different-port-20220325020956-262786 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-25T02:10:33.407Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:10:33.407Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:10:33.407Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:10:33.408Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-03-25T02:10:33.408Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-03-25T02:15:02.377Z","caller":"traceutil/trace.go:171","msg":"trace[2068280475] linearizableReadLoop","detail":"{readStateIndex:604; appliedIndex:604; }","duration":"150.408748ms","start":"2022-03-25T02:15:02.227Z","end":"2022-03-25T02:15:02.377Z","steps":["trace[2068280475] 'read index received'  (duration: 150.399902ms)","trace[2068280475] 'applied index is now lower than readState.Index'  (duration: 7.441µs)"],"step_count":2}
	{"level":"warn","ts":"2022-03-25T02:15:02.485Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"173.383294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-25T02:15:02.485Z","caller":"traceutil/trace.go:171","msg":"trace[336424621] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:537; }","duration":"173.538195ms","start":"2022-03-25T02:15:02.312Z","end":"2022-03-25T02:15:02.485Z","steps":["trace[336424621] 'agreement among raft nodes before linearized reading'  (duration: 65.686248ms)","trace[336424621] 'range keys from in-memory index tree'  (duration: 107.671066ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-25T02:15:02.485Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"241.494023ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-25T02:15:02.485Z","caller":"traceutil/trace.go:171","msg":"trace[1794756522] range","detail":"{range_begin:/registry/statefulsets/; range_end:/registry/statefulsets0; response_count:0; response_revision:537; }","duration":"241.70445ms","start":"2022-03-25T02:15:02.243Z","end":"2022-03-25T02:15:02.485Z","steps":["trace[1794756522] 'agreement among raft nodes before linearized reading'  (duration: 133.735426ms)","trace[1794756522] 'count revisions from in-memory index tree'  (duration: 107.74338ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-25T02:15:02.485Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"258.336191ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-03-25T02:15:02.485Z","caller":"traceutil/trace.go:171","msg":"trace[1332604840] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:537; }","duration":"258.680861ms","start":"2022-03-25T02:15:02.227Z","end":"2022-03-25T02:15:02.485Z","steps":["trace[1332604840] 'agreement among raft nodes before linearized reading'  (duration: 150.58741ms)","trace[1332604840] 'count revisions from in-memory index tree'  (duration: 107.724613ms)"],"step_count":2}
	{"level":"info","ts":"2022-03-25T02:20:33.420Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":546}
	{"level":"info","ts":"2022-03-25T02:20:33.421Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":546,"took":"560.549µs"}
	
	* 
	* ==> kernel <==
	*  02:22:56 up  5:00,  0 users,  load average: 0.32, 0.60, 1.05
	Linux default-k8s-different-port-20220325020956-262786 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182] <==
	* I0325 02:10:35.407779       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0325 02:10:35.407821       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0325 02:10:35.407883       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0325 02:10:35.410667       1 cache.go:39] Caches are synced for autoregister controller
	I0325 02:10:35.418114       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0325 02:10:35.426256       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0325 02:10:36.307036       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0325 02:10:36.307060       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0325 02:10:36.312536       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0325 02:10:36.315494       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0325 02:10:36.315514       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0325 02:10:36.735448       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0325 02:10:36.766331       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0325 02:10:36.903400       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0325 02:10:36.909041       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0325 02:10:36.910002       1 controller.go:611] quota admission added evaluator for: endpoints
	I0325 02:10:36.913660       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0325 02:10:37.498548       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0325 02:10:38.290755       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0325 02:10:38.299864       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0325 02:10:38.310209       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0325 02:10:43.395414       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0325 02:10:50.755106       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0325 02:10:51.255928       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0325 02:10:51.929935       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73] <==
	* I0325 02:10:50.352002       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0325 02:10:50.399012       1 shared_informer.go:247] Caches are synced for expand 
	I0325 02:10:50.400127       1 shared_informer.go:247] Caches are synced for attach detach 
	I0325 02:10:50.402593       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0325 02:10:50.413393       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0325 02:10:50.440429       1 shared_informer.go:247] Caches are synced for stateful set 
	I0325 02:10:50.451558       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0325 02:10:50.457285       1 shared_informer.go:247] Caches are synced for namespace 
	I0325 02:10:50.498016       1 shared_informer.go:247] Caches are synced for service account 
	I0325 02:10:50.508551       1 shared_informer.go:247] Caches are synced for resource quota 
	I0325 02:10:50.521736       1 shared_informer.go:247] Caches are synced for disruption 
	I0325 02:10:50.521785       1 disruption.go:371] Sending events to api server.
	I0325 02:10:50.534205       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0325 02:10:50.552612       1 shared_informer.go:247] Caches are synced for deployment 
	I0325 02:10:50.555880       1 shared_informer.go:247] Caches are synced for resource quota 
	I0325 02:10:50.761437       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7cpjt"
	I0325 02:10:50.763623       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-kt955"
	I0325 02:10:50.972353       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0325 02:10:51.015363       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0325 02:10:51.015391       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0325 02:10:51.258069       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0325 02:10:51.357575       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-dpp64"
	I0325 02:10:51.362162       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-9tgbz"
	I0325 02:10:51.549492       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0325 02:10:51.558391       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-dpp64"
	
	* 
	* ==> kube-proxy [dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b] <==
	* I0325 02:10:51.903633       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0325 02:10:51.903717       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0325 02:10:51.903776       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0325 02:10:51.926345       1 server_others.go:206] "Using iptables Proxier"
	I0325 02:10:51.926371       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0325 02:10:51.926379       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0325 02:10:51.926398       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0325 02:10:51.926824       1 server.go:656] "Version info" version="v1.23.3"
	I0325 02:10:51.927429       1 config.go:226] "Starting endpoint slice config controller"
	I0325 02:10:51.927435       1 config.go:317] "Starting service config controller"
	I0325 02:10:51.927463       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0325 02:10:51.927465       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0325 02:10:52.028308       1 shared_informer.go:247] Caches are synced for service config 
	I0325 02:10:52.028348       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd] <==
	* W0325 02:10:35.393403       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0325 02:10:35.393411       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0325 02:10:35.393426       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 02:10:35.393427       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0325 02:10:35.393780       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 02:10:35.393803       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0325 02:10:35.393937       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0325 02:10:35.393971       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0325 02:10:35.394024       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0325 02:10:35.394054       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0325 02:10:35.394671       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0325 02:10:35.394703       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 02:10:35.394724       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 02:10:35.394701       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0325 02:10:35.394676       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0325 02:10:35.394772       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0325 02:10:36.234022       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 02:10:36.234107       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0325 02:10:36.361824       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 02:10:36.361852       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0325 02:10:36.372015       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:10:36.372056       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0325 02:10:36.495928       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0325 02:10:36.495976       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0325 02:10:38.389536       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-03-25 02:10:08 UTC, end at Fri 2022-03-25 02:22:56 UTC. --
	Mar 25 02:21:33 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:21:33.775065    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:21:38 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:21:38.776646    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:21:43 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:21:43.778077    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:21:48 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:21:48.778777    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:21:53 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:21:53.779629    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:21:58 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:21:58.780561    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:03 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:03.782282    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:08 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:08.783038    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:13 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:13.784390    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:15 default-k8s-different-port-20220325020956-262786 kubelet[1287]: I0325 02:22:15.653678    1287 scope.go:110] "RemoveContainer" containerID="030de033d0939ded8e5344d90e7d56927ace37474be7e3f274dda51d9fa71a50"
	Mar 25 02:22:15 default-k8s-different-port-20220325020956-262786 kubelet[1287]: I0325 02:22:15.654021    1287 scope.go:110] "RemoveContainer" containerID="246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	Mar 25 02:22:15 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:15.654340    1287 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-kt955_kube-system(87a42b24-60b7-415b-abc9-e574262093c0)\"" pod="kube-system/kindnet-kt955" podUID=87a42b24-60b7-415b-abc9-e574262093c0
	Mar 25 02:22:18 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:18.785813    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:23 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:23.787083    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:28 default-k8s-different-port-20220325020956-262786 kubelet[1287]: I0325 02:22:28.411755    1287 scope.go:110] "RemoveContainer" containerID="246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	Mar 25 02:22:28 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:28.412103    1287 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-kt955_kube-system(87a42b24-60b7-415b-abc9-e574262093c0)\"" pod="kube-system/kindnet-kt955" podUID=87a42b24-60b7-415b-abc9-e574262093c0
	Mar 25 02:22:28 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:28.788025    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:33 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:33.789332    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:38 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:38.790632    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:40 default-k8s-different-port-20220325020956-262786 kubelet[1287]: I0325 02:22:40.411738    1287 scope.go:110] "RemoveContainer" containerID="246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	Mar 25 02:22:40 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:40.412029    1287 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-kt955_kube-system(87a42b24-60b7-415b-abc9-e574262093c0)\"" pod="kube-system/kindnet-kt955" podUID=87a42b24-60b7-415b-abc9-e574262093c0
	Mar 25 02:22:43 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:43.791933    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:48 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:48.793335    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:53 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:53.794429    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:55 default-k8s-different-port-20220325020956-262786 kubelet[1287]: I0325 02:22:55.411684    1287 scope.go:110] "RemoveContainer" containerID="246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	

                                                
                                                
-- /stdout --
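Diagnostic note: the kubelet log above shows the kindnet-cni container (pod kindnet-kt955) in CrashLoopBackOff, which leaves the CNI uninitialized and the node NotReady. One possible way to pull the crashing container's output for triage (illustrative commands, not part of the captured run; pod and profile names are taken from the log above):

	kubectl --context default-k8s-different-port-20220325020956-262786 -n kube-system logs kindnet-kt955 -c kindnet-cni --previous
	out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220325020956-262786 -- sudo crictl ps -a --name kindnet-cni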
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220325020956-262786 -n default-k8s-different-port-20220325020956-262786
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20220325020956-262786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: busybox coredns-64897985d-9tgbz storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220325020956-262786 describe pod busybox coredns-64897985d-9tgbz storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220325020956-262786 describe pod busybox coredns-64897985d-9tgbz storage-provisioner: exit status 1 (61.477589ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dwnt4 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-dwnt4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  49s (x8 over 8m3s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-9tgbz" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20220325020956-262786 describe pod busybox coredns-64897985d-9tgbz storage-provisioner: exit status 1
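Diagnostic note: the FailedScheduling event above blames the node.kubernetes.io/not-ready taint rather than the busybox pod itself, consistent with the CNI never initializing. A quick way to confirm the node condition and taint (illustrative commands; this assumes minikube's usual convention of naming the node after the profile):

	kubectl --context default-k8s-different-port-20220325020956-262786 get nodes
	kubectl --context default-k8s-different-port-20220325020956-262786 describe node default-k8s-different-port-20220325020956-262786 | grep -A1 Taints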
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20220325020956-262786
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20220325020956-262786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4",
	        "Created": "2022-03-25T02:10:07.830065737Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501164,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-25T02:10:08.208646726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/hosts",
	        "LogPath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4-json.log",
	        "Name": "/default-k8s-different-port-20220325020956-262786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220325020956-262786:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220325020956-262786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873-init/diff:/var/lib/docker/overlay2/be54eb94fcdf63a43430c63a2deca34069b6322a2c5c80bf32c41c253b4eef44/diff:/var/lib/docker/overlay2/21ae1babc9289fff260c3571871aeb833b37e21656a9cc3eb8df07eb3fe4f79a/diff:/var/lib/docker/overlay2/3ee3331c2ddb88569b411d0ab54a2ef7d7d01ca16c80ced0833580bfbe9bdada/diff:/var/lib/docker/overlay2/d8bc8d60c9bd47ff1095b644ef6d44396d637a148bcebd5ea3b7706fee6b13fe/diff:/var/lib/docker/overlay2/ca1519d93c9c70a99f709b179bab33e31837f4b561c407c362770656a0ad970a/diff:/var/lib/docker/overlay2/8b7ed626d5c01c442f80e5e9bbe87bdaa4e3b209e4d0720010e78ab32631a44a/diff:/var/lib/docker/overlay2/fb54723378f675b6bc72cd8608807007fdf0fc435e1383398764588c2881dcc7/diff:/var/lib/docker/overlay2/20deb1df880f3adcdce0caa0e0b6ce0170bb01f7b7c564aa7c00c10e886a8422/diff:/var/lib/docker/overlay2/3e9c58516a6ca7eb07cbd77ece10826bcffc2c564c20a046413c894f1e457c14/diff:/var/lib/docker/overlay2/9fb4a5
72727350e63058db77497edb3aa8f3fd157bf3faa4b882f3d6218a2d2c/diff:/var/lib/docker/overlay2/2dad70b776042365cd2686f6925d1728b98e82e82f5ec21fcafaa6ce796653ed/diff:/var/lib/docker/overlay2/d94272e0e249e656b05e1483e035d137254d3bab6b9c568065d1f8783a72cf04/diff:/var/lib/docker/overlay2/c92254120acded698585ef577c9ac3d6f73267981cf36a87ee38ccd694f47b94/diff:/var/lib/docker/overlay2/84b4bbb670c367ba779baabe503b5345e2c3e2beb5a4505c3b235e5db4e89ee6/diff:/var/lib/docker/overlay2/4981a02b24aef7d5c066a42837381dcdd4a299b491d8e55523fca674cd0db0d1/diff:/var/lib/docker/overlay2/c3c34e9c466bb3a144a51042f0930825943916afe285a7f97644c400518f341f/diff:/var/lib/docker/overlay2/44f4921d100d6ba90db390588513726503aec84844325bd99eeb137c6018277f/diff:/var/lib/docker/overlay2/a39a458488b7f863079e4c6b58196e8a4f9082987519a734c45a007cd0d94828/diff:/var/lib/docker/overlay2/f0312047c7b0b02fd66fd826e23406e40cee0ca3ceecfd3ead5dcecbc5026230/diff:/var/lib/docker/overlay2/96c9397a20500e41888794ddb5877995a1734042648a24d59ca0d2ec5021e9de/diff:/var/lib/d
ocker/overlay2/8dea2fad08fc7127380e7b5ee48074c49d9bb8abb4e0e626d1753b47e734e16a/diff:/var/lib/docker/overlay2/b45ce3d74626e250be956220b3bdd19784c7b5f160566cf2abc4e3bebec2e787/diff:/var/lib/docker/overlay2/c53d5b53646725c2e75d104fbdf63f67b1a6d4ec7be410f678c39db7ca88704f/diff:/var/lib/docker/overlay2/c989625713fed7c79c6acf122f86cb4a5d36c5c25f16b6ff042aba0f5c76ef40/diff:/var/lib/docker/overlay2/062c90de70f705242f19d7fd008480be165d852e536336d97bcfe7aaba03bc2c/diff:/var/lib/docker/overlay2/9dd5e3e1997449a8dd0820e30ab1aa5b34db265e9783f9431ebcca7ceaf17510/diff:/var/lib/docker/overlay2/4cb50a0a67380109d348cd3005e5b855fceaf243cf5b0130df8952ed58e6c56c/diff:/var/lib/docker/overlay2/1fce572a3789e30bd91fd684a3bc2cae58743b3718b1d078378158f22156795e/diff:/var/lib/docker/overlay2/2bb28738c8f2de75a3da83169e8b29e28c57bf73908fe80dbca06551ac39d459/diff:/var/lib/docker/overlay2/70c1f9c120af3acda7bbd97c063aeed205a47f16b2818b7a2c4e5cfa2e3321bf/diff:/var/lib/docker/overlay2/84cfc718f71abd3da77845f467dceaeceb62953d1f92e9cb2d966b19d2e
9a733/diff:/var/lib/docker/overlay2/8d6f862f75e903072cefca0f974c925dc5946ac5bf7bcb923adecf23cdb3d454/diff:/var/lib/docker/overlay2/778af97f4ec3a1e9ceed247958939b375c3209058ee649ac0231b3ccf59c0e5d/diff:/var/lib/docker/overlay2/c0e0a5b57f41ef9ddf67d67f928bcbbd060abb8aa3ec732c9ee48b3d5ce723a2/diff:/var/lib/docker/overlay2/f4bc2ed173f4985e492d89df0a08aa6017952a9ac37054537d57bb7589c1560e/diff:/var/lib/docker/overlay2/562d496753ef0c1e8279787dfdb7cb4d6e8cfbd0eaf79a1f9dc3fd10916330b5/diff:/var/lib/docker/overlay2/717fb77b4f16514e3bd496845adfe213bd63609053b43f6d800e6757197f0f04/diff:/var/lib/docker/overlay2/4e8d84337665652419a5a40f908d308087c202f55b785114c1e38be84a17eca7/diff:/var/lib/docker/overlay2/5b34f3b4b29c9f9ab991b524096584bbf01d14e9d8d4b7786bda6d28241999e8/diff:/var/lib/docker/overlay2/49e6c28c6a50420d2f8e2a3d3278d425495086d9478a7ece39dd989925949a5d/diff:/var/lib/docker/overlay2/86c1534e0117ca4e106fa3c177c4f1b2d85e37b9d2a5dceeb007afff1721713e/diff:/var/lib/docker/overlay2/c5013a5641f131cadca99884c2ae5b785bfae4
a079463490ea0cd215cd884d43/diff:/var/lib/docker/overlay2/f61ccdb261987275521111370c06a14baf102e5047e24281f278eaaee820a410/diff:/var/lib/docker/overlay2/46838e2b0c3f67b4bfda29963d76e2c8babbd54904a4a6f5745e924a73437c2d/diff:/var/lib/docker/overlay2/16180439a4d3ee12ff794b26cbfd692186d7785b4c6f33c8c57416535667c54e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220325020956-262786",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220325020956-262786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220325020956-262786",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220325020956-262786",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220325020956-262786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "752dba0b0d51e54f65ae14a0ffc9beb457cc13e80db6430b791d2057b780914e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49574"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49573"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49570"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49572"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49571"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/752dba0b0d51",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220325020956-262786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0e271f66fa8d",
	                        "default-k8s-different-port-20220325020956-262786"
	                    ],
	                    "NetworkID": "c5c0224540019d877be5e36bfc556dc0a2d83980f6e5b563be26e38eaad27a38",
	                    "EndpointID": "ded2360703a0715d75d023434cbc7944232d0b2cfe6e083bf6f1fbb0113e0018",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
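Diagnostic note: in the inspect output, HostConfig.PortBindings requests ephemeral host ports (HostPort is empty), and the resolved mappings appear only under NetworkSettings.Ports, e.g. the API server port 8444/tcp landed on 127.0.0.1:49571. One way to read a resolved mapping directly (illustrative, using the container name shown above):

	docker port default-k8s-different-port-20220325020956-262786 8444
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-different-port-20220325020956-262786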
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220325020956-262786 -n default-k8s-different-port-20220325020956-262786
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220325020956-262786 logs -n 25
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                         | disable-driver-mounts-20220325020956-262786      | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:56 UTC | Fri, 25 Mar 2022 02:09:56 UTC |
	|         | disable-driver-mounts-20220325020956-262786                |                                                  |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:09:12 UTC | Fri, 25 Mar 2022 02:14:36 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.3                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:47 UTC | Fri, 25 Mar 2022 02:14:47 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| pause   | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:47 UTC | Fri, 25 Mar 2022 02:14:48 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| unpause | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:49 UTC | Fri, 25 Mar 2022 02:14:50 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:52 UTC | Fri, 25 Mar 2022 02:14:53 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:51 UTC | Fri, 25 Mar 2022 02:14:54 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:54 UTC | Fri, 25 Mar 2022 02:14:54 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20220325021454-262786 --memory=2200          | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:54 UTC | Fri, 25 Mar 2022 02:15:49 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:49 UTC | Fri, 25 Mar 2022 02:15:50 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:50 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:21 UTC | Fri, 25 Mar 2022 02:16:22 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:23 UTC | Fri, 25 Mar 2022 02:16:24 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:24 UTC | Fri, 25 Mar 2022 02:16:25 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:25 UTC | Fri, 25 Mar 2022 02:16:35 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:35 UTC | Fri, 25 Mar 2022 02:16:35 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20220325021454-262786 --memory=2200          | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:45 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:45 UTC | Fri, 25 Mar 2022 02:16:45 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:45 UTC | Fri, 25 Mar 2022 02:16:46 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:47 UTC | Fri, 25 Mar 2022 02:16:48 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:48 UTC | Fri, 25 Mar 2022 02:16:51 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:51 UTC | Fri, 25 Mar 2022 02:16:52 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786                      | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:19:35 UTC | Fri, 25 Mar 2022 02:19:36 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:55 UTC | Fri, 25 Mar 2022 02:22:56 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 02:16:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0325 02:16:35.482311  519649 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:16:35.482451  519649 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:16:35.482462  519649 out.go:310] Setting ErrFile to fd 2...
	I0325 02:16:35.482467  519649 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:16:35.482575  519649 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:16:35.482813  519649 out.go:304] Setting JSON to false
	I0325 02:16:35.484309  519649 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17668,"bootTime":1648156928,"procs":518,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:16:35.484382  519649 start.go:125] virtualization: kvm guest
	I0325 02:16:35.487068  519649 out.go:176] * [no-preload-20220325020326-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:16:35.488730  519649 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:16:35.487298  519649 notify.go:193] Checking for updates...
	I0325 02:16:35.490311  519649 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:16:35.491877  519649 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:16:35.493486  519649 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:16:35.495057  519649 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:16:35.496266  519649 config.go:176] Loaded profile config "no-preload-20220325020326-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:16:35.497491  519649 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:16:35.540694  519649 docker.go:136] docker version: linux-20.10.14
	I0325 02:16:35.540841  519649 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:16:35.641548  519649 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:16:35.575580325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:16:35.641678  519649 docker.go:253] overlay module found
	I0325 02:16:35.644240  519649 out.go:176] * Using the docker driver based on existing profile
	I0325 02:16:35.644293  519649 start.go:284] selected driver: docker
	I0325 02:16:35.644302  519649 start.go:801] validating driver "docker" against &{Name:no-preload-20220325020326-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:16:35.644458  519649 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:16:35.644501  519649 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:16:35.644530  519649 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:16:35.646030  519649 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:16:35.646742  519649 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:16:35.752278  519649 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:16:35.682730162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0325 02:16:35.752465  519649 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:16:35.752492  519649 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:16:35.754658  519649 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:16:35.754778  519649 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 02:16:35.754810  519649 cni.go:93] Creating CNI manager for ""
	I0325 02:16:35.754821  519649 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:16:35.754840  519649 start_flags.go:304] config:
	{Name:no-preload-20220325020326-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:16:35.756791  519649 out.go:176] * Starting control plane node no-preload-20220325020326-262786 in cluster no-preload-20220325020326-262786
	I0325 02:16:35.756829  519649 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:16:35.758358  519649 out.go:176] * Pulling base image ...
	I0325 02:16:35.758390  519649 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime containerd
	I0325 02:16:35.758492  519649 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:16:35.758563  519649 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/config.json ...
	I0325 02:16:35.758688  519649 cache.go:107] acquiring lock: {Name:mkadc5033eb4d9179acd1c6e7ff0e25d4981568c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758710  519649 cache.go:107] acquiring lock: {Name:mk0987b0339865c5416a6746bce8670ad78c0a9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758707  519649 cache.go:107] acquiring lock: {Name:mkdc6a82c5ad28a9b97463884b87944eaef2fef7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758830  519649 cache.go:107] acquiring lock: {Name:mk140b8e2c06d387b642b813a7efd82a9f19d6c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758829  519649 cache.go:107] acquiring lock: {Name:mk8ed79f1ecf0bc83b0d3ead06534032f65db356 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758880  519649 cache.go:107] acquiring lock: {Name:mkcb4c0577b6fb6a4cc15cd1cfc04742789dcc24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758920  519649 cache.go:107] acquiring lock: {Name:mk1134717661547774a1dd6d6e2854162646543d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758911  519649 cache.go:107] acquiring lock: {Name:mk61dd10aefdeb5283d07e3024688797852e36d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.759022  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
	I0325 02:16:35.759030  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0 exists
	I0325 02:16:35.759047  519649 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 372.469µs
	I0325 02:16:35.759047  519649 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0" took 131.834µs
	I0325 02:16:35.759061  519649 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
	I0325 02:16:35.759064  519649 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0 succeeded
	I0325 02:16:35.758904  519649 cache.go:107] acquiring lock: {Name:mkcf6d57389d13d4e31240b1cdf9af5455cf82f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.759073  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0 exists
	I0325 02:16:35.759078  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0325 02:16:35.759099  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 exists
	I0325 02:16:35.759090  519649 cache.go:107] acquiring lock: {Name:mkd382d09a068cdb98cdc085f7d3d174faef8f1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.759109  519649 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1" took 210.056µs
	I0325 02:16:35.759116  519649 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
	I0325 02:16:35.759104  519649 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 350.331µs
	I0325 02:16:35.759086  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists
	I0325 02:16:35.759124  519649 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0325 02:16:35.759102  519649 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0" took 354.111µs
	I0325 02:16:35.759149  519649 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0 succeeded
	I0325 02:16:35.759143  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0 exists
	I0325 02:16:35.759144  519649 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 439.796µs
	I0325 02:16:35.759168  519649 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded
	I0325 02:16:35.759127  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists
	I0325 02:16:35.759167  519649 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0" took 339.705µs
	I0325 02:16:35.759178  519649 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0 succeeded
	I0325 02:16:35.759105  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0325 02:16:35.759188  519649 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 362.557µs
	I0325 02:16:35.759203  519649 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded
	I0325 02:16:35.759199  519649 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 504.454µs
	I0325 02:16:35.759217  519649 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0325 02:16:35.759228  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0 exists
	I0325 02:16:35.759276  519649 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0" took 279.744µs
	I0325 02:16:35.759305  519649 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0 succeeded
	I0325 02:16:35.759331  519649 cache.go:87] Successfully saved all images to host disk.
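
All of the cache hits above follow one pattern: take a per-image lock, stat the tarball under .minikube/cache/images, and skip the download when it already exists, which is why each "cache image" step takes only a few hundred microseconds. A minimal Go sketch of that check-or-skip path (illustrative names only, not minikube's actual cache package):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
        "sync"
        "time"
    )

    // locks serializes work per destination path, mirroring the
    // per-image "acquiring lock" lines in the log above.
    var locks sync.Map

    // cacheImage stats the cached tarball and only downloads when missing.
    func cacheImage(cacheDir, image string) error {
        // e.g. k8s.gcr.io/pause:3.6 -> k8s.gcr.io/pause_3.6
        dst := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))

        mu, _ := locks.LoadOrStore(dst, &sync.Mutex{})
        mu.(*sync.Mutex).Lock()
        defer mu.(*sync.Mutex).Unlock()

        start := time.Now()
        if _, err := os.Stat(dst); err == nil {
            fmt.Printf("cache image %q -> %q took %s (already exists)\n",
                image, dst, time.Since(start))
            return nil // fast path: tarball already on disk
        }
        // The slow path would pull the image and write the tarball here.
        return fmt.Errorf("not cached: %s", image)
    }

    func main() {
        _ = cacheImage("/tmp/cache", "k8s.gcr.io/pause:3.6")
    }
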
	I0325 02:16:35.794208  519649 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:16:35.794250  519649 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:16:35.794266  519649 cache.go:208] Successfully downloaded all kic artifacts
	I0325 02:16:35.794300  519649 start.go:348] acquiring machines lock for no-preload-20220325020326-262786: {Name:mk0b68e00c1687cd51ada59f78a2181cd58687dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.794388  519649 start.go:352] acquired machines lock for "no-preload-20220325020326-262786" in 69.622µs
	I0325 02:16:35.794408  519649 start.go:94] Skipping create...Using existing machine configuration
	I0325 02:16:35.794412  519649 fix.go:55] fixHost starting: 
	I0325 02:16:35.794639  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:16:35.829675  519649 fix.go:108] recreateIfNeeded on no-preload-20220325020326-262786: state=Stopped err=<nil>
	W0325 02:16:35.829710  519649 fix.go:134] unexpected machine state, will restart: <nil>
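
fixHost chooses between reusing, restarting, and recreating the machine based on the container's docker state, read with `docker container inspect --format={{.State.Status}}` as above; a stopped container takes the "will restart" branch. A rough sketch of that decision, shelling out to the docker CLI (the exact branch structure is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState returns docker's State.Status for a named container.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        name := "no-preload-20220325020326-262786"
        switch state, err := containerState(name); {
        case err != nil:
            fmt.Println("container is missing, will recreate")
        case state == "running":
            fmt.Println("reusing running container")
        default: // e.g. "exited" (stopped), as in the log above
            fmt.Println("unexpected machine state, will restart")
            _ = exec.Command("docker", "start", name).Run()
        }
    }
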
	I0325 02:16:30.919166  516439 api_server.go:165] Checking apiserver status ...
	I0325 02:16:30.919257  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:30.927996  516439 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:30.928016  516439 api_server.go:165] Checking apiserver status ...
	I0325 02:16:30.928054  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:30.936308  516439 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:30.936337  516439 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0325 02:16:30.936344  516439 kubeadm.go:1067] stopping kube-system containers ...
	I0325 02:16:30.936355  516439 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0325 02:16:30.936402  516439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:16:30.961816  516439 cri.go:87] found id: "e3ee84b63ba33bcbfea4203eedc8a7d9573afda58890320b68f36d9cdf3bf2a8"
	I0325 02:16:30.961847  516439 cri.go:87] found id: "c16f6adb1790c3742b747bf61bfd1f357b72b0991ac3be7dbf874baa850fa2f5"
	I0325 02:16:30.961853  516439 cri.go:87] found id: "a9ae918cd79ec7953a8c2b8e19f9dd9716b4e319662e0b15cd6c7656e2668966"
	I0325 02:16:30.961869  516439 cri.go:87] found id: "576c531344a89713a22df123a23d95cf4df6514aa92aeadd890dd6891ea08031"
	I0325 02:16:30.961874  516439 cri.go:87] found id: "016ff43b53acf403c3cade0a6b87ed824539070c26fb0a1a43b665e04899b8ca"
	I0325 02:16:30.961880  516439 cri.go:87] found id: "74fb5be813cd2fffa2e56033edaaaac236ae7d6186cc67ee6afceba343a5edb8"
	I0325 02:16:30.961885  516439 cri.go:87] found id: ""
	I0325 02:16:30.961891  516439 cri.go:232] Stopping containers: [e3ee84b63ba33bcbfea4203eedc8a7d9573afda58890320b68f36d9cdf3bf2a8 c16f6adb1790c3742b747bf61bfd1f357b72b0991ac3be7dbf874baa850fa2f5 a9ae918cd79ec7953a8c2b8e19f9dd9716b4e319662e0b15cd6c7656e2668966 576c531344a89713a22df123a23d95cf4df6514aa92aeadd890dd6891ea08031 016ff43b53acf403c3cade0a6b87ed824539070c26fb0a1a43b665e04899b8ca 74fb5be813cd2fffa2e56033edaaaac236ae7d6186cc67ee6afceba343a5edb8]
	I0325 02:16:30.961942  516439 ssh_runner.go:195] Run: which crictl
	I0325 02:16:30.965080  516439 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop e3ee84b63ba33bcbfea4203eedc8a7d9573afda58890320b68f36d9cdf3bf2a8 c16f6adb1790c3742b747bf61bfd1f357b72b0991ac3be7dbf874baa850fa2f5 a9ae918cd79ec7953a8c2b8e19f9dd9716b4e319662e0b15cd6c7656e2668966 576c531344a89713a22df123a23d95cf4df6514aa92aeadd890dd6891ea08031 016ff43b53acf403c3cade0a6b87ed824539070c26fb0a1a43b665e04899b8ca 74fb5be813cd2fffa2e56033edaaaac236ae7d6186cc67ee6afceba343a5edb8
	I0325 02:16:30.990650  516439 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0325 02:16:31.001312  516439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:16:31.009030  516439 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Mar 25 02:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Mar 25 02:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Mar 25 02:15 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Mar 25 02:15 /etc/kubernetes/scheduler.conf
	
	I0325 02:16:31.009104  516439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0325 02:16:31.016238  516439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0325 02:16:31.022869  516439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0325 02:16:31.029565  516439 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:31.029621  516439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0325 02:16:31.036474  516439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0325 02:16:31.043067  516439 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:31.043125  516439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0325 02:16:31.049642  516439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:16:31.056883  516439 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
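
The grep/rm sequence above is how stale kubeconfigs are reconciled: each of the four files under /etc/kubernetes is checked for the expected control-plane endpoint, and any file that does not mention it is deleted so that the following `kubeadm init phase kubeconfig` run regenerates it. A compact local sketch of the same pruning (helper name is hypothetical; the real check runs over SSH):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // pruneStaleKubeconfigs removes config files that do not mention the
    // expected API endpoint, mirroring the grep-then-rm in the log above.
    func pruneStaleKubeconfigs(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                continue // missing file: nothing to prune
            }
            if !bytes.Contains(data, []byte(endpoint)) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                _ = os.Remove(f)
            }
        }
    }

    func main() {
        pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443",
            []string{
                "/etc/kubernetes/admin.conf",
                "/etc/kubernetes/kubelet.conf",
                "/etc/kubernetes/controller-manager.conf",
                "/etc/kubernetes/scheduler.conf",
            })
    }
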
	I0325 02:16:31.056914  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:31.101487  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:31.789161  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:31.922185  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:31.984722  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:32.028325  516439 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:16:32.028393  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:32.537756  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:33.037616  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:33.537339  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:34.037634  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:34.537880  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:35.037295  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:35.538072  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
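
The pgrep runs above repeat on a roughly 500ms cadence: after the kubeadm phases are re-run, the start code simply polls until a kube-apiserver process exists. A minimal sketch of such a wait loop (the pgrep pattern is copied from the log; the function name and timeout are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until the apiserver process
    // appears or the deadline passes, matching the cadence in the log.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Output()
            if err == nil && len(out) > 0 {
                return nil // a pid was printed: the process is up
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver process did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
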
	I0325 02:16:35.968327  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:37.968941  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:35.833187  519649 out.go:176] * Restarting existing docker container for "no-preload-20220325020326-262786" ...
	I0325 02:16:35.833270  519649 cli_runner.go:133] Run: docker start no-preload-20220325020326-262786
	I0325 02:16:36.223867  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:16:36.260748  519649 kic.go:420] container "no-preload-20220325020326-262786" state is running.
	I0325 02:16:36.261158  519649 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220325020326-262786
	I0325 02:16:36.295907  519649 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/config.json ...
	I0325 02:16:36.296110  519649 machine.go:88] provisioning docker machine ...
	I0325 02:16:36.296134  519649 ubuntu.go:169] provisioning hostname "no-preload-20220325020326-262786"
	I0325 02:16:36.296174  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:36.331323  519649 main.go:130] libmachine: Using SSH client type: native
	I0325 02:16:36.331546  519649 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49589 <nil> <nil>}
	I0325 02:16:36.331564  519649 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20220325020326-262786 && echo "no-preload-20220325020326-262786" | sudo tee /etc/hostname
	I0325 02:16:36.332175  519649 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50526->127.0.0.1:49589: read: connection reset by peer
	I0325 02:16:39.464533  519649 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20220325020326-262786
	
	I0325 02:16:39.464619  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:39.500131  519649 main.go:130] libmachine: Using SSH client type: native
	I0325 02:16:39.500311  519649 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49589 <nil> <nil>}
	I0325 02:16:39.500341  519649 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220325020326-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220325020326-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220325020326-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 02:16:39.619029  519649 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 02:16:39.619064  519649 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:16:39.619085  519649 ubuntu.go:177] setting up certificates
	I0325 02:16:39.619100  519649 provision.go:83] configureAuth start
	I0325 02:16:39.619161  519649 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220325020326-262786
	I0325 02:16:39.653347  519649 provision.go:138] copyHostCerts
	I0325 02:16:39.653407  519649 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:16:39.653421  519649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:16:39.653484  519649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:16:39.653581  519649 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:16:39.653592  519649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:16:39.653616  519649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:16:39.653673  519649 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:16:39.653687  519649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:16:39.653707  519649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:16:39.653765  519649 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220325020326-262786 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220325020326-262786]
	I0325 02:16:39.955829  519649 provision.go:172] copyRemoteCerts
	I0325 02:16:39.955898  519649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:16:39.955933  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:39.989898  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.079856  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0325 02:16:40.099567  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:16:40.119824  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0325 02:16:40.140874  519649 provision.go:86] duration metric: configureAuth took 521.759605ms
	I0325 02:16:40.140906  519649 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:16:40.141163  519649 config.go:176] Loaded profile config "no-preload-20220325020326-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:16:40.141185  519649 machine.go:91] provisioned docker machine in 3.845060196s
	I0325 02:16:40.141193  519649 start.go:302] post-start starting for "no-preload-20220325020326-262786" (driver="docker")
	I0325 02:16:40.141201  519649 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:16:40.141260  519649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:16:40.141308  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:40.180699  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.271442  519649 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:16:40.274944  519649 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:16:40.275028  519649 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:16:40.275041  519649 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:16:40.275051  519649 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:16:40.275064  519649 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:16:40.275115  519649 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:16:40.275176  519649 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:16:40.275263  519649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:16:40.282729  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:16:40.301545  519649 start.go:305] post-start completed in 160.334219ms
	I0325 02:16:40.301629  519649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:16:40.301692  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:40.340243  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.427579  519649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:16:40.431311  519649 fix.go:57] fixHost completed within 4.636891748s
	I0325 02:16:40.431332  519649 start.go:81] releasing machines lock for "no-preload-20220325020326-262786", held for 4.636932836s
	I0325 02:16:40.431419  519649 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220325020326-262786
	I0325 02:16:40.471929  519649 ssh_runner.go:195] Run: systemctl --version
	I0325 02:16:40.471972  519649 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:16:40.471994  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:40.472031  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:36.038098  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:36.537401  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:37.037404  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:37.537180  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:38.037556  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:38.099215  516439 api_server.go:71] duration metric: took 6.070889838s to wait for apiserver process to appear ...
	I0325 02:16:38.099286  516439 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:16:38.099301  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:38.099706  516439 api_server.go:256] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": dial tcp 192.168.58.2:8443: connect: connection refused
	I0325 02:16:38.600314  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:41.706206  516439 api_server.go:266] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0325 02:16:41.706241  516439 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0325 02:16:42.100667  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:42.105436  516439 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:16:42.105478  516439 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:16:42.599961  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:42.605081  516439 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:16:42.605109  516439 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:16:43.100711  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:43.105895  516439 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0325 02:16:43.112809  516439 api_server.go:140] control plane version: v1.23.4-rc.0
	I0325 02:16:43.112833  516439 api_server.go:130] duration metric: took 5.013539931s to wait for apiserver health ...
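
The healthz responses above trace a normal apiserver restart: connection refused while the process binds, 403 because the unauthenticated probe is rejected until the RBAC bootstrap roles (which grant anonymous access to /healthz) exist, 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200 "ok". A sketch of a poller that treats everything but 200 as not-ready (TLS verification is skipped since the probe sends no client certificate; names and timeouts are assumptions):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls /healthz until it returns 200 or the deadline
    // passes. 403 and 500 bodies are progress markers, not success.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Anonymous probe against the apiserver's self-signed cert.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        _ = waitHealthz("https://192.168.58.2:8443/healthz", 2*time.Minute)
    }

The short per-request client timeout keeps the loop responsive even while the endpoint is still refusing connections outright.
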
	I0325 02:16:43.112846  516439 cni.go:93] Creating CNI manager for ""
	I0325 02:16:43.112855  516439 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:16:43.115000  516439 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:16:43.115081  516439 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:16:43.119112  516439 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl ...
	I0325 02:16:43.119136  516439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:16:43.132304  516439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:16:43.929421  516439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:16:43.937528  516439 system_pods.go:59] 9 kube-system pods found
	I0325 02:16:43.937572  516439 system_pods.go:61] "coredns-64897985d-p65tg" [e65563a2-916d-4e4f-9899-45abcf6e43e6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:43.937583  516439 system_pods.go:61] "etcd-newest-cni-20220325021454-262786" [301b74c1-25bb-412c-8781-5b02da9c4093] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0325 02:16:43.937590  516439 system_pods.go:61] "kindnet-td766" [40872158-4184-4df2-ae83-e42d228b4223] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:16:43.937600  516439 system_pods.go:61] "kube-apiserver-newest-cni-20220325021454-262786" [d2e43879-332a-448a-97c5-1a2bea717597] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0325 02:16:43.937605  516439 system_pods.go:61] "kube-controller-manager-newest-cni-20220325021454-262786" [8af92ea2-d71d-4620-ac1c-594d1cf3cd2b] Running
	I0325 02:16:43.937612  516439 system_pods.go:61] "kube-proxy-fj7dd" [1af095d5-b04f-4be9-bd3b-e2c7a2b373b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0325 02:16:43.937621  516439 system_pods.go:61] "kube-scheduler-newest-cni-20220325021454-262786" [33a2b8ac-d72f-4399-971a-38f587c9994c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0325 02:16:43.937627  516439 system_pods.go:61] "metrics-server-b955d9d8-sbk6n" [80ba7292-f3cd-4e79-88b4-6e9f5d1e738e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:43.937636  516439 system_pods.go:61] "storage-provisioner" [28ecf9b3-cf1c-495e-a39e-8fe37150d662] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:43.937642  516439 system_pods.go:74] duration metric: took 8.196027ms to wait for pod list to return data ...
	I0325 02:16:43.937652  516439 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:16:43.940863  516439 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:16:43.940894  516439 node_conditions.go:123] node cpu capacity is 8
	I0325 02:16:43.940904  516439 node_conditions.go:105] duration metric: took 3.247281ms to run NodePressure ...
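
The node_conditions check reads each node's capacity and pressure conditions straight from the API, which is where the "304695084Ki" and "cpu capacity is 8" figures come from. Roughly, with client-go (a sketch assuming k8s.io/client-go is available and a kubeconfig sits at the default location; not minikube's actual implementation):

    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity values like "304695084Ki" and "8" come from here.
            storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[v1.ResourceCPU]
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n",
                n.Name, storage.String(), cpu.String())
            // "Verifying NodePressure" means none of these may be True.
            for _, c := range n.Status.Conditions {
                if (c.Type == v1.NodeMemoryPressure || c.Type == v1.NodeDiskPressure ||
                    c.Type == v1.NodePIDPressure) && c.Status == v1.ConditionTrue {
                    fmt.Printf("  pressure condition %s is True\n", c.Type)
                }
            }
        }
    }
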
	I0325 02:16:43.940927  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:44.087258  516439 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 02:16:44.094685  516439 ops.go:34] apiserver oom_adj: -16
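
The oom_adj probe reads /proc/<pid>/oom_adj for the newest kube-apiserver process and expects a strongly negative value (-16 here), meaning the kernel OOM killer will prefer to kill almost anything else first. The same read in Go (a trivial sketch):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // pgrep -n prints the newest matching pid, as in the log above.
        pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("apiserver not running:", err)
            return
        }
        path := fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid)))
        adj, err := os.ReadFile(path)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
    }
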
	I0325 02:16:44.094721  516439 kubeadm.go:605] restartCluster took 16.202985802s
	I0325 02:16:44.094732  516439 kubeadm.go:393] StartCluster complete in 16.248550193s
	I0325 02:16:44.094758  516439 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:16:44.094885  516439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:16:44.096265  516439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:16:44.101456  516439 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220325021454-262786" rescaled to 1
	I0325 02:16:44.101529  516439 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:16:44.103443  516439 out.go:176] * Verifying Kubernetes components...
	I0325 02:16:44.103511  516439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:16:44.101558  516439 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0325 02:16:44.103612  516439 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220325021454-262786"
	I0325 02:16:44.103628  516439 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220325021454-262786"
	I0325 02:16:44.103636  516439 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220325021454-262786"
	W0325 02:16:44.103642  516439 addons.go:165] addon storage-provisioner should already be in state true
	I0325 02:16:44.103644  516439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220325021454-262786"
	I0325 02:16:44.103659  516439 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220325021454-262786"
	I0325 02:16:44.103685  516439 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220325021454-262786"
	I0325 02:16:44.103693  516439 host.go:66] Checking if "newest-cni-20220325021454-262786" exists ...
	W0325 02:16:44.103700  516439 addons.go:165] addon metrics-server should already be in state true
	I0325 02:16:44.103616  516439 addons.go:65] Setting dashboard=true in profile "newest-cni-20220325021454-262786"
	I0325 02:16:44.103733  516439 host.go:66] Checking if "newest-cni-20220325021454-262786" exists ...
	I0325 02:16:44.103732  516439 addons.go:153] Setting addon dashboard=true in "newest-cni-20220325021454-262786"
	W0325 02:16:44.103905  516439 addons.go:165] addon dashboard should already be in state true
	I0325 02:16:44.103988  516439 host.go:66] Checking if "newest-cni-20220325021454-262786" exists ...
	I0325 02:16:44.104010  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:44.101542  516439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 02:16:44.104212  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:44.101745  516439 config.go:176] Loaded profile config "newest-cni-20220325021454-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:16:44.104241  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:44.104495  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:44.121208  516439 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:16:44.121280  516439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:44.155459  516439 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:16:44.155647  516439 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:16:44.155665  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 02:16:44.155751  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:44.162366  516439 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I0325 02:16:44.163990  516439 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0325 02:16:44.161446  516439 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220325021454-262786"
	W0325 02:16:44.164031  516439 addons.go:165] addon default-storageclass should already be in state true
	I0325 02:16:44.164070  516439 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0325 02:16:44.164081  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0325 02:16:44.164083  516439 host.go:66] Checking if "newest-cni-20220325021454-262786" exists ...
	I0325 02:16:40.468819  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:42.968016  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:44.165737  516439 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0325 02:16:44.164138  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:44.165834  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0325 02:16:44.165852  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0325 02:16:44.164608  516439 cli_runner.go:133] Run: docker container inspect newest-cni-20220325021454-262786 --format={{.State.Status}}
	I0325 02:16:44.165907  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:44.192351  516439 api_server.go:71] duration metric: took 90.77915ms to wait for apiserver process to appear ...
	I0325 02:16:44.192383  516439 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:16:44.192398  516439 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0325 02:16:44.192396  516439 start.go:757] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0325 02:16:44.198241  516439 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0325 02:16:44.199343  516439 api_server.go:140] control plane version: v1.23.4-rc.0
	I0325 02:16:44.199364  516439 api_server.go:130] duration metric: took 6.9739ms to wait for apiserver health ...
	I0325 02:16:44.199376  516439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:16:44.203708  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:44.209623  516439 system_pods.go:59] 9 kube-system pods found
	I0325 02:16:44.209665  516439 system_pods.go:61] "coredns-64897985d-p65tg" [e65563a2-916d-4e4f-9899-45abcf6e43e6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:44.209676  516439 system_pods.go:61] "etcd-newest-cni-20220325021454-262786" [301b74c1-25bb-412c-8781-5b02da9c4093] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0325 02:16:44.209686  516439 system_pods.go:61] "kindnet-td766" [40872158-4184-4df2-ae83-e42d228b4223] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:16:44.209706  516439 system_pods.go:61] "kube-apiserver-newest-cni-20220325021454-262786" [d2e43879-332a-448a-97c5-1a2bea717597] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0325 02:16:44.209719  516439 system_pods.go:61] "kube-controller-manager-newest-cni-20220325021454-262786" [8af92ea2-d71d-4620-ac1c-594d1cf3cd2b] Running
	I0325 02:16:44.209734  516439 system_pods.go:61] "kube-proxy-fj7dd" [1af095d5-b04f-4be9-bd3b-e2c7a2b373b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0325 02:16:44.209784  516439 system_pods.go:61] "kube-scheduler-newest-cni-20220325021454-262786" [33a2b8ac-d72f-4399-971a-38f587c9994c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0325 02:16:44.209802  516439 system_pods.go:61] "metrics-server-b955d9d8-sbk6n" [80ba7292-f3cd-4e79-88b4-6e9f5d1e738e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:44.209812  516439 system_pods.go:61] "storage-provisioner" [28ecf9b3-cf1c-495e-a39e-8fe37150d662] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:16:44.209818  516439 system_pods.go:74] duration metric: took 10.436764ms to wait for pod list to return data ...
	I0325 02:16:44.209858  516439 default_sa.go:34] waiting for default service account to be created ...
	I0325 02:16:44.215792  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:44.216231  516439 default_sa.go:45] found service account: "default"
	I0325 02:16:44.216319  516439 default_sa.go:55] duration metric: took 6.410246ms for default service account to be created ...
	I0325 02:16:44.216344  516439 kubeadm.go:548] duration metric: took 114.781757ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0325 02:16:44.216396  516439 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:16:44.219134  516439 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:16:44.219161  516439 node_conditions.go:123] node cpu capacity is 8
	I0325 02:16:44.219175  516439 node_conditions.go:105] duration metric: took 2.773273ms to run NodePressure ...
	I0325 02:16:44.219210  516439 start.go:213] waiting for startup goroutines ...
	I0325 02:16:44.221833  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:44.222359  516439 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 02:16:44.222381  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 02:16:44.222432  516439 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220325021454-262786
	I0325 02:16:44.261798  516439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49584 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220325021454-262786/id_rsa Username:docker}
	I0325 02:16:44.319771  516439 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0325 02:16:44.319803  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0325 02:16:44.319846  516439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:16:44.321101  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0325 02:16:44.321125  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0325 02:16:44.334351  516439 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0325 02:16:44.334375  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0325 02:16:44.334647  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0325 02:16:44.334666  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0325 02:16:44.349057  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0325 02:16:44.349094  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0325 02:16:44.349070  516439 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:16:44.349161  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0325 02:16:44.389276  516439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:16:44.392743  516439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 02:16:44.393530  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0325 02:16:44.393550  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0325 02:16:44.410521  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0325 02:16:44.410552  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0325 02:16:44.496572  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0325 02:16:44.496606  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0325 02:16:44.515360  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0325 02:16:44.515405  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0325 02:16:44.600692  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0325 02:16:44.600722  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0325 02:16:44.688604  516439 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:16:44.688635  516439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0325 02:16:44.707599  516439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:16:44.889028  516439 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220325021454-262786"
	I0325 02:16:45.068498  516439 out.go:176] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0325 02:16:45.068530  516439 addons.go:417] enableAddons completed in 966.974309ms
	I0325 02:16:45.105519  516439 start.go:499] kubectl: 1.23.5, cluster: 1.23.4-rc.0 (minor skew: 0)
	I0325 02:16:45.107876  516439 out.go:176] * Done! kubectl is now configured to use "newest-cni-20220325021454-262786" cluster and "default" namespace by default
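
Every addon above is installed the same way: render the manifest in memory, copy it to /etc/kubernetes/addons on the node (the "scp memory -->" lines), then apply all files for the addon in one kubectl invocation with the node-local kubeconfig. A condensed sketch of that loop, writing to the local filesystem instead of over SSH (helper name is hypothetical):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    // applyAddon writes rendered manifests into the addons directory and
    // applies them in a single kubectl run, as the log above does.
    func applyAddon(kubectl, kubeconfig, dir string, manifests map[string][]byte) error {
        args := []string{"apply", "--kubeconfig=" + kubeconfig}
        for name, data := range manifests {
            dst := filepath.Join(dir, name)
            if err := os.WriteFile(dst, data, 0o644); err != nil {
                return err
            }
            args = append(args, "-f", dst)
        }
        out, err := exec.Command(kubectl, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        err := applyAddon(
            "/var/lib/minikube/binaries/v1.23.4-rc.0/kubectl",
            "/var/lib/minikube/kubeconfig",
            "/etc/kubernetes/addons",
            map[string][]byte{"storage-provisioner.yaml": []byte("# rendered manifest")},
        )
        if err != nil {
            fmt.Println(err)
        }
    }
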
	I0325 02:16:40.514344  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.516013  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.624849  519649 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:16:40.637160  519649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:16:40.647198  519649 docker.go:183] disabling docker service ...
	I0325 02:16:40.647293  519649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:16:40.657506  519649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:16:40.667205  519649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:16:40.756526  519649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:16:40.838425  519649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 02:16:40.849201  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:16:40.862764  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
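(Decoding the base64 payload above with base64 -d yields the generated /etc/containerd/config.toml. Among the settings visible in the decoded text: version = 2, root = "/var/lib/containerd", state = "/run/containerd", sandbox_image = "k8s.gcr.io/pause:3.6", snapshotter = "overlayfs", SystemdCgroup = false for the runc runtime, and conf_dir = "/etc/cni/net.mk", which matches the kubelet.cni-conf-dir flag set later in this run.)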
	I0325 02:16:40.877296  519649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:16:40.884604  519649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:16:40.891942  519649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:16:40.968097  519649 ssh_runner.go:195] Run: sudo systemctl restart containerd
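The four Runs above prepare kernel networking before containerd is restarted: bridged pod traffic must be visible to iptables (br_netfilter) and IPv4 forwarding must be on. A minimal sketch of the same preparation in Go; this is a hypothetical helper, not minikube's code, and it assumes it runs as root on Linux:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // prepareKernelNetworking mirrors the log above: probe the
    // bridge-netfilter sysctl, enable IPv4 forwarding, then reload
    // systemd units and restart containerd.
    func prepareKernelNetworking() error {
    	// `sysctl net.bridge.bridge-nf-call-iptables` only checks that the
    	// br_netfilter module is loaded; if it is missing, bridged pod
    	// traffic would bypass iptables entirely.
    	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		return fmt.Errorf("br_netfilter not available: %w", err)
    	}
    	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
    		return err
    	}
    	for _, args := range [][]string{{"daemon-reload"}, {"restart", "containerd"}} {
    		if err := exec.Command("systemctl", args...).Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() {
    	if err := prepareKernelNetworking(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }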
	I0325 02:16:41.042195  519649 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:16:41.042340  519649 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:16:41.046206  519649 start.go:462] Will wait 60s for crictl version
	I0325 02:16:41.046277  519649 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:16:41.069914  519649 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:16:41Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0325 02:16:44.968453  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:47.468552  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:52.117787  519649 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:16:52.144102  519649 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 02:16:52.144170  519649 ssh_runner.go:195] Run: containerd --version
	I0325 02:16:52.168021  519649 ssh_runner.go:195] Run: containerd --version
	I0325 02:16:52.192255  519649 out.go:176] * Preparing Kubernetes v1.23.4-rc.0 on containerd 1.5.10 ...
	I0325 02:16:52.192348  519649 cli_runner.go:133] Run: docker network inspect no-preload-20220325020326-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:16:52.228171  519649 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0325 02:16:52.231817  519649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
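The bash one-liner above (and its twin for control-plane.minikube.internal later in this run) makes the hosts entry idempotent: drop any previous line for the name, append the current mapping, then copy the result back under sudo. The same logic expressed in Go, as a sketch only; the real code simply shells out as shown, and this version assumes write access to the file:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost removes any existing line for `name` from the hosts
    // file and appends "ip\tname", so repeated starts never stack
    // duplicate entries.
    func upsertHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop the stale mapping
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := upsertHost("/etc/hosts", "192.168.67.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }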
	I0325 02:16:49.968284  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:52.467868  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:54.468272  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:52.244329  519649 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 02:16:52.244416  519649 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime containerd
	I0325 02:16:52.244468  519649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:16:52.271321  519649 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:16:52.271344  519649 cache_images.go:84] Images are preloaded, skipping loading
	I0325 02:16:52.271385  519649 ssh_runner.go:195] Run: sudo crictl info
	I0325 02:16:52.298329  519649 cni.go:93] Creating CNI manager for ""
	I0325 02:16:52.298360  519649 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:16:52.298373  519649 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 02:16:52.298389  519649 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.4-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220325020326-262786 NodeName:no-preload-20220325020326-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 02:16:52.298577  519649 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220325020326-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.4-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0325 02:16:52.298682  519649 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.4-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220325020326-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
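(The empty ExecStart= line in the drop-in above is the standard systemd idiom: it clears the ExecStart inherited from the packaged kubelet.service so the following line fully replaces, rather than appends to, the unit's command.)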
	I0325 02:16:52.298747  519649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4-rc.0
	I0325 02:16:52.306846  519649 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 02:16:52.306918  519649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 02:16:52.315084  519649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (582 bytes)
	I0325 02:16:52.328704  519649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0325 02:16:52.342299  519649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2066 bytes)
	I0325 02:16:52.355577  519649 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0325 02:16:52.358463  519649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:16:52.367826  519649 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786 for IP: 192.168.67.2
	I0325 02:16:52.367934  519649 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 02:16:52.367989  519649 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 02:16:52.368051  519649 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/client.key
	I0325 02:16:52.368101  519649 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.key.c7fa3a9e
	I0325 02:16:52.368132  519649 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.key
	I0325 02:16:52.368232  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 02:16:52.368263  519649 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 02:16:52.368275  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 02:16:52.368299  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 02:16:52.368335  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 02:16:52.368357  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 02:16:52.368397  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:16:52.368977  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 02:16:52.386350  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0325 02:16:52.404078  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 02:16:52.422535  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0325 02:16:52.441293  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 02:16:52.458689  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 02:16:52.476708  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 02:16:52.494410  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 02:16:52.511769  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 02:16:52.529287  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 02:16:52.546092  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 02:16:52.562842  519649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 02:16:52.574641  519649 ssh_runner.go:195] Run: openssl version
	I0325 02:16:52.579369  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 02:16:52.586915  519649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 02:16:52.590088  519649 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 02:16:52.590144  519649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 02:16:52.595082  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
	I0325 02:16:52.601804  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 02:16:52.608863  519649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 02:16:52.611860  519649 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 02:16:52.611906  519649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 02:16:52.616573  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
	I0325 02:16:52.622899  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 02:16:52.629919  519649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:16:52.632815  519649 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:16:52.632859  519649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:16:52.637417  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
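The openssl x509 -hash / ln -fs pairs above implement the OpenSSL CA-directory convention: TLS clients locate a trusted CA by looking up <subject-hash>.0 in /etc/ssl/certs. A sketch of that install step in Go, shelling out to openssl for the hash exactly as the log does (assumes openssl is installed and the process can write to /etc/ssl/certs):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA links certPath into /etc/ssl/certs under the
    // <subject-hash>.0 name that OpenSSL-based clients search for.
    func installCA(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace the link if present
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }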
	I0325 02:16:52.644239  519649 kubeadm.go:391] StartCluster: {Name:no-preload-20220325020326-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:16:52.644354  519649 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 02:16:52.644394  519649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:16:52.669210  519649 cri.go:87] found id: "e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741"
	I0325 02:16:52.669242  519649 cri.go:87] found id: "0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc"
	I0325 02:16:52.669249  519649 cri.go:87] found id: "ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252"
	I0325 02:16:52.669254  519649 cri.go:87] found id: "fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b"
	I0325 02:16:52.669270  519649 cri.go:87] found id: "e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710"
	I0325 02:16:52.669279  519649 cri.go:87] found id: "b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244"
	I0325 02:16:52.669283  519649 cri.go:87] found id: ""
	I0325 02:16:52.669324  519649 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0325 02:16:52.683722  519649 cri.go:114] JSON = null
	W0325 02:16:52.683785  519649 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
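(The JSON = null / "unpause failed" warning above is benign in this run: minikube first asks runc for paused containers so it can unpause them if the cluster had been left in a `minikube pause` state; runc listing zero paused containers while crictl ps sees six simply means nothing was paused, and startup proceeds.)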
	I0325 02:16:52.683838  519649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 02:16:52.690850  519649 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0325 02:16:52.690872  519649 kubeadm.go:601] restartCluster start
	I0325 02:16:52.690912  519649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0325 02:16:52.697516  519649 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:52.698228  519649 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220325020326-262786" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:16:52.698600  519649 kubeconfig.go:127] "no-preload-20220325020326-262786" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0325 02:16:52.699273  519649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:16:52.700696  519649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0325 02:16:52.707667  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:52.707717  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:52.715666  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:52.916102  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:52.916184  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:52.925481  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.116769  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.116855  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.125381  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.316671  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.316772  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.325189  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.516483  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.516581  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.525793  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.716104  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.716183  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.724648  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.915849  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.915940  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.924616  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.115776  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.115861  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.124538  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.316714  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.316801  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.325601  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.515836  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.515913  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.524158  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.716463  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.716549  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.725607  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.915823  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.915903  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.924487  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.116802  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.116901  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.126160  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.316446  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.316526  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.324891  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:56.468419  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:58.968213  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:16:55.516554  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.516656  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.525265  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.716429  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.716509  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.725617  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.725645  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.725683  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.733139  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.733164  519649 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0325 02:16:55.733174  519649 kubeadm.go:1067] stopping kube-system containers ...
	I0325 02:16:55.733193  519649 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0325 02:16:55.733247  519649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:16:55.758794  519649 cri.go:87] found id: "e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741"
	I0325 02:16:55.758826  519649 cri.go:87] found id: "0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc"
	I0325 02:16:55.758835  519649 cri.go:87] found id: "ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252"
	I0325 02:16:55.758843  519649 cri.go:87] found id: "fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b"
	I0325 02:16:55.758852  519649 cri.go:87] found id: "e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710"
	I0325 02:16:55.758860  519649 cri.go:87] found id: "b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244"
	I0325 02:16:55.758867  519649 cri.go:87] found id: ""
	I0325 02:16:55.758874  519649 cri.go:232] Stopping containers: [e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741 0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252 fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710 b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244]
	I0325 02:16:55.758928  519649 ssh_runner.go:195] Run: which crictl
	I0325 02:16:55.762024  519649 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741 0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252 fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710 b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244
	I0325 02:16:55.786603  519649 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0325 02:16:55.796385  519649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:16:55.803085  519649 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar 25 02:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Mar 25 02:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Mar 25 02:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Mar 25 02:03 /etc/kubernetes/scheduler.conf
	
	I0325 02:16:55.803151  519649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0325 02:16:55.809939  519649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0325 02:16:55.816507  519649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0325 02:16:55.822744  519649 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.822807  519649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0325 02:16:55.828985  519649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0325 02:16:55.835918  519649 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.835967  519649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0325 02:16:55.843105  519649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:16:55.850384  519649 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0325 02:16:55.850419  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:55.893825  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:56.667540  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:56.802771  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:56.854899  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
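(Note the restart path above replays individual kubeadm init phase steps in order, certs, kubeconfig, kubelet-start, control-plane, etcd, with the addon phase coming later, rather than running a full kubeadm init. That is what lets the stopped cluster come back with its existing etcd data directory and previously issued certificates, which the certs steps earlier in this log already reported as "skipping ... generation".)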
	I0325 02:16:56.922247  519649 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:16:56.922327  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:57.431777  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:57.932218  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:58.431927  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:58.931629  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:59.432174  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:59.932237  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:00.431697  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:00.968915  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:03.468075  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:00.932213  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:01.431617  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:01.931744  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:02.431861  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:02.931562  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:02.996665  519649 api_server.go:71] duration metric: took 6.074430006s to wait for apiserver process to appear ...
	I0325 02:17:02.996706  519649 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:17:02.996721  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:02.997178  519649 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0325 02:17:03.497954  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:06.096426  519649 api_server.go:266] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0325 02:17:06.096466  519649 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0325 02:17:06.497872  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:06.502718  519649 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:17:06.502746  519649 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:17:06.998348  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:07.002908  519649 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:17:07.002934  519649 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:17:07.497481  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:07.502551  519649 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0325 02:17:07.508747  519649 api_server.go:140] control plane version: v1.23.4-rc.0
	I0325 02:17:07.508776  519649 api_server.go:130] duration metric: took 4.512062997s to wait for apiserver health ...
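The healthz sequence above is the normal boot order for a restarted apiserver: connection refused while the process starts, 403 once TLS is up but anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks run, then 200 ok. A sketch of such a poller in Go; it is hypothetical, and it skips certificate verification the way a bootstrap health probe must before trust is configured:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver until /healthz returns 200 ok.
    // Refused connections, 403 and 500 all mean "not ready yet".
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
    	if err := waitHealthz("https://192.168.67.2:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }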
	I0325 02:17:07.508793  519649 cni.go:93] Creating CNI manager for ""
	I0325 02:17:07.508800  519649 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:17:05.468506  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:07.968498  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:07.511699  519649 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:17:07.511795  519649 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:17:07.515865  519649 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl ...
	I0325 02:17:07.515896  519649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:17:07.530511  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:17:08.432775  519649 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:17:08.439909  519649 system_pods.go:59] 9 kube-system pods found
	I0325 02:17:08.439946  519649 system_pods.go:61] "coredns-64897985d-b9827" [29b80e2f-89fe-4b4a-a931-333a59535d4c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:17:08.439962  519649 system_pods.go:61] "etcd-no-preload-20220325020326-262786" [add71311-f324-4612-b981-ca42b0ef813c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0325 02:17:08.439971  519649 system_pods.go:61] "kindnet-nhlsm" [57939cf7-016c-486a-8a08-466ff1515c1f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:17:08.439977  519649 system_pods.go:61] "kube-apiserver-no-preload-20220325020326-262786" [f9b1f749-8d63-446e-bd36-152e849a5bf6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0325 02:17:08.439990  519649 system_pods.go:61] "kube-controller-manager-no-preload-20220325020326-262786" [a229a2c1-6ed0-434a-8b3c-7951beee3fe0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0325 02:17:08.439994  519649 system_pods.go:61] "kube-proxy-l6tg2" [f41c6b8d-0d57-4096-af80-8e9a7da29b60] Running
	I0325 02:17:08.440003  519649 system_pods.go:61] "kube-scheduler-no-preload-20220325020326-262786" [a41de5aa-8f3c-46cd-bc8e-85c035c31512] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0325 02:17:08.440012  519649 system_pods.go:61] "metrics-server-b955d9d8-dzczk" [5c06ad70-f575-44ee-8a14-d4d2b172ccf2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:17:08.440019  519649 system_pods.go:61] "storage-provisioner" [d778a38b-7ebf-4a50-956a-6628a9055852] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:17:08.440027  519649 system_pods.go:74] duration metric: took 7.223437ms to wait for pod list to return data ...
	I0325 02:17:08.440037  519649 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:17:08.443080  519649 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:17:08.443104  519649 node_conditions.go:123] node cpu capacity is 8
	I0325 02:17:08.443116  519649 node_conditions.go:105] duration metric: took 3.071905ms to run NodePressure ...
	I0325 02:17:08.443134  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:17:08.590505  519649 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0325 02:17:08.611319  519649 kubeadm.go:752] kubelet initialised
	I0325 02:17:08.611346  519649 kubeadm.go:753] duration metric: took 20.794737ms waiting for restarted kubelet to initialise ...
	I0325 02:17:08.611354  519649 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:17:08.617229  519649 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-b9827" in "kube-system" namespace to be "Ready" ...
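The pod_ready wait above amounts to polling the pod's Ready condition through the API until it turns True or the 4m0s budget runs out. A minimal client-go equivalent, as a sketch rather than minikube's actual code; the kubeconfig path and pod name are taken from this log and assumed reachable:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's Ready condition is True.
    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-64897985d-b9827", metav1.GetOptions{})
    		if err == nil && isReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }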
	I0325 02:17:09.968693  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:12.468173  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:10.623188  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:13.123899  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:14.968191  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:17.468172  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:15.623504  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:18.123637  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:19.968292  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:22.468166  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:20.623486  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:22.624740  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:24.625363  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:24.968021  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:27.468041  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:29.468565  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:27.123366  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:29.123949  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:31.968178  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:33.968823  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:31.623836  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:34.123164  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:36.468695  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:38.967993  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:36.123971  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:38.623418  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:41.468821  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:43.968154  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:40.623650  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:43.124505  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:45.968404  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:47.968532  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:45.624087  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:48.123363  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:50.468244  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:52.468797  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:54.468960  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:50.623592  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:52.624829  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:55.124055  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:56.968701  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:59.467918  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:17:57.623248  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:59.623684  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:01.468256  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:03.967939  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:01.623899  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:04.123560  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:05.968665  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:08.467884  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:06.124019  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:08.623070  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:10.468279  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:12.468416  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:11.123374  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:13.623289  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:14.967919  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:16.968150  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:19.468065  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:15.623672  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:18.124412  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:21.468475  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:23.968850  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:20.624197  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:23.123807  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:25.124272  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:26.468766  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:28.968612  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:27.624274  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:30.123559  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:30.968779  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:33.468295  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:32.623099  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:34.623275  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:35.468741  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:37.968661  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:36.623368  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:38.623990  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:40.468313  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:42.468818  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:40.624162  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:43.123758  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:44.968325  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:47.468369  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:45.623667  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:47.623731  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:50.123654  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:49.968304  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:51.968856  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:54.468654  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:52.623485  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:54.623818  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:56.968573  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:58.968977  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:18:57.123496  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:59.124157  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:01.470174  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:03.968282  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:01.623917  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:04.123410  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:05.968412  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:07.968843  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:06.124235  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:08.124325  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:10.467818  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:12.468731  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:10.623795  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:13.123199  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:15.124279  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:14.967929  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:16.968185  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:18.968894  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:17.623867  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:20.124329  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:21.468097  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:23.468504  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:22.622920  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:24.623325  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:25.968086  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:28.467817  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:26.623622  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:29.123797  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:30.467966  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:32.967797  496534 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
	I0325 02:19:34.470135  496534 node_ready.go:38] duration metric: took 4m0.008592307s waiting for node "old-k8s-version-20220325015306-262786" to be "Ready" ...
	I0325 02:19:34.472535  496534 out.go:176] 
	W0325 02:19:34.472693  496534 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:19:34.472714  496534 out.go:241] * 
	W0325 02:19:34.473654  496534 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0325 02:19:31.124139  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:33.623203  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:35.623380  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:37.623882  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:40.123633  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:42.124935  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:44.622980  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:46.623461  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:48.623960  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:51.123453  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:53.124042  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:55.623040  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:57.623769  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:59.624128  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:02.123372  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:04.623176  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:06.623948  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:09.123779  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:11.124054  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:13.624042  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:16.123406  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:18.124039  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:20.124112  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:22.623270  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:24.623999  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:27.123242  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:29.124370  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:31.124412  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:33.623358  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:36.123946  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:38.124288  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:40.623896  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:43.123554  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:45.124025  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:47.623811  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:50.123422  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:52.123897  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:54.124053  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:56.624021  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:20:59.123559  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:21:01.124195  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:21:03.623329  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:21:05.623709  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:21:08.123740  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:21:08.621002  519649 pod_ready.go:81] duration metric: took 4m0.003733568s waiting for pod "coredns-64897985d-b9827" in "kube-system" namespace to be "Ready" ...
	E0325 02:21:08.621038  519649 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-b9827" in "kube-system" namespace to be "Ready" (will not retry!)
	I0325 02:21:08.621065  519649 pod_ready.go:38] duration metric: took 4m0.009701445s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:21:08.621095  519649 kubeadm.go:605] restartCluster took 4m15.930218796s
	W0325 02:21:08.621264  519649 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
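
The pod_ready.go:102 entries above are a fixed-interval readiness poll: the PodReady condition is re-checked roughly every two seconds until the 4m0s deadline expires at 02:21:08, at which point minikube gives up and resets the cluster. Below is a minimal client-go sketch of an equivalent loop; it is an illustration only, not minikube's actual pod_ready.go, and the kubeconfig path, namespace, and pod name are assumptions taken from this log.

    // Sketch of a fixed-interval pod readiness poll, similar in shape to the
    // pod_ready.go wait recorded above. Assumes k8s.io/client-go is available;
    // this is NOT minikube's implementation.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // tolerate transient API errors and keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil // a Pending pod may not carry the condition yet
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Pod name as it appears in the log above.
        if err := waitPodReady(cs, "kube-system", "coredns-64897985d-b9827", 4*time.Minute); err != nil {
            fmt.Println("pod never became Ready:", err)
        }
    }

Returning false (retry) on both a missing condition and a transient error matches the behavior recorded above, where the Pending status is re-printed every ~2s until the timeout.
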
	I0325 02:21:08.621308  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0325 02:21:10.388277  519649 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.766939087s)
	I0325 02:21:10.388356  519649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:21:10.397928  519649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:21:10.405143  519649 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 02:21:10.405196  519649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:21:10.412369  519649 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0325 02:21:10.412423  519649 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0325 02:21:23.839012  519649 out.go:203]   - Generating certificates and keys ...
	I0325 02:21:23.841551  519649 out.go:203]   - Booting up control plane ...
	I0325 02:21:23.844819  519649 out.go:203]   - Configuring RBAC rules ...
	I0325 02:21:23.846446  519649 cni.go:93] Creating CNI manager for ""
	I0325 02:21:23.846463  519649 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:21:23.848159  519649 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:21:23.848260  519649 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:21:23.851792  519649 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl ...
	I0325 02:21:23.851811  519649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:21:23.864694  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:21:24.545001  519649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 02:21:24.545086  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:24.545087  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=no-preload-20220325020326-262786 minikube.k8s.io/updated_at=2022_03_25T02_21_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:24.552352  519649 ops.go:34] apiserver oom_adj: -16
	I0325 02:21:24.617795  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:25.174278  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:25.675236  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:26.175029  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:26.674497  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:27.174775  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:27.674258  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:28.174824  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:28.674646  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:29.174252  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:29.675260  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:30.175187  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:30.674792  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:31.174185  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:31.674250  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:32.174501  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:32.675112  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:33.174579  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:33.674182  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:34.174816  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:34.674733  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:35.174444  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:35.675064  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:36.174387  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:36.674259  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:37.174753  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:37.675061  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:37.741138  519649 kubeadm.go:1020] duration metric: took 13.196118254s to wait for elevateKubeSystemPrivileges.
	I0325 02:21:37.741171  519649 kubeadm.go:393] StartCluster complete in 4m45.096948299s
	I0325 02:21:37.741190  519649 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:21:37.741314  519649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:21:37.742545  519649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:21:38.259722  519649 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220325020326-262786" rescaled to 1
	I0325 02:21:38.259791  519649 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:21:38.261749  519649 out.go:176] * Verifying Kubernetes components...
	I0325 02:21:38.259824  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 02:21:38.261828  519649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:21:38.259842  519649 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0325 02:21:38.261923  519649 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220325020326-262786"
	I0325 02:21:38.261953  519649 addons.go:65] Setting metrics-server=true in profile "no-preload-20220325020326-262786"
	I0325 02:21:38.261962  519649 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220325020326-262786"
	I0325 02:21:38.261965  519649 addons.go:153] Setting addon metrics-server=true in "no-preload-20220325020326-262786"
	I0325 02:21:38.261933  519649 addons.go:65] Setting dashboard=true in profile "no-preload-20220325020326-262786"
	W0325 02:21:38.261977  519649 addons.go:165] addon metrics-server should already be in state true
	I0325 02:21:38.262018  519649 host.go:66] Checking if "no-preload-20220325020326-262786" exists ...
	W0325 02:21:38.261970  519649 addons.go:165] addon storage-provisioner should already be in state true
	I0325 02:21:38.262134  519649 host.go:66] Checking if "no-preload-20220325020326-262786" exists ...
	I0325 02:21:38.261981  519649 addons.go:153] Setting addon dashboard=true in "no-preload-20220325020326-262786"
	W0325 02:21:38.262196  519649 addons.go:165] addon dashboard should already be in state true
	I0325 02:21:38.262244  519649 host.go:66] Checking if "no-preload-20220325020326-262786" exists ...
	I0325 02:21:38.261943  519649 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220325020326-262786"
	I0325 02:21:38.262309  519649 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220325020326-262786"
	I0325 02:21:38.260052  519649 config.go:176] Loaded profile config "no-preload-20220325020326-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:21:38.262573  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:21:38.262579  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:21:38.262610  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:21:38.262698  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:21:38.272707  519649 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220325020326-262786" to be "Ready" ...
	I0325 02:21:38.320596  519649 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:21:38.320821  519649 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:21:38.320836  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 02:21:38.320907  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:21:38.321013  519649 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220325020326-262786"
	W0325 02:21:38.321039  519649 addons.go:165] addon default-storageclass should already be in state true
	I0325 02:21:38.321070  519649 host.go:66] Checking if "no-preload-20220325020326-262786" exists ...
	I0325 02:21:38.321575  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:21:38.324184  519649 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0325 02:21:38.324252  519649 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0325 02:21:38.324270  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0325 02:21:38.324324  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:21:38.336145  519649 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I0325 02:21:38.337877  519649 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0325 02:21:38.337968  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0325 02:21:38.337980  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0325 02:21:38.338045  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:21:38.376075  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:21:38.378999  519649 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 02:21:38.379027  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 02:21:38.379082  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:21:38.384592  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:21:38.391085  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:21:38.406139  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0325 02:21:38.430033  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:21:38.505660  519649 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0325 02:21:38.505695  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0325 02:21:38.510841  519649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:21:38.602641  519649 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0325 02:21:38.602672  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0325 02:21:38.694575  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0325 02:21:38.694613  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0325 02:21:38.696025  519649 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:21:38.696050  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0325 02:21:38.705044  519649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 02:21:38.789746  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0325 02:21:38.789782  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0325 02:21:38.791823  519649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:21:38.813086  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0325 02:21:38.813128  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0325 02:21:38.895062  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0325 02:21:38.895094  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0325 02:21:38.912219  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0325 02:21:38.912252  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0325 02:21:39.000012  519649 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
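
The sed pipeline run at 02:21:38 (confirmed by the start.go:777 line above) splices a hosts stanza into the coredns ConfigMap immediately before the forward plugin. Reconstructed from that command, the resulting Corefile fragment should look roughly like this; this is the expected shape, not a capture from this run:

    hosts {
       192.168.67.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf

The fallthrough directive hands any name other than host.minikube.internal on to the next plugin, so in-cluster DNS resolution is otherwise unchanged.
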
	I0325 02:21:39.085188  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0325 02:21:39.085284  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0325 02:21:39.190895  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0325 02:21:39.190929  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0325 02:21:39.210367  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0325 02:21:39.210397  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0325 02:21:39.285312  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:21:39.285346  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0325 02:21:39.306663  519649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:21:39.525639  519649 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220325020326-262786"
	I0325 02:21:40.286516  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:21:40.404818  519649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.098109992s)
	I0325 02:21:40.407835  519649 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0325 02:21:40.407870  519649 addons.go:417] enableAddons completed in 2.14803176s
	I0325 02:21:42.779727  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:21:44.779767  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:21:47.280211  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:21:49.779262  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:21:51.779687  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:21:54.279848  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:21:56.280050  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:21:58.779731  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:00.780260  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:03.279281  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:05.279729  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:07.279906  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:09.780010  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:12.280241  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:14.779921  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:17.279893  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:19.779940  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:22.280412  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:24.779387  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:26.779919  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:29.279534  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:31.280132  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:33.779899  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:36.280242  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:38.780135  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:41.280030  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:43.780084  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:46.279339  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:48.279930  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:50.779251  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:52.780056  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:22:54.780774  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	f5e3884eab777       6de166512aa22       2 seconds ago       Running             kindnet-cni               4                   f2bb5089f8868
	246eba7f6d94c       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   f2bb5089f8868
	dd3e42aaf3dd8       9b7cc99821098       12 minutes ago      Running             kube-proxy                0                   b8a442f1cca90
	21482958b68c2       b07520cd7ab76       12 minutes ago      Running             kube-controller-manager   0                   f48ebb07b3e52
	bc6cf9877becc       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   083318a0382f5
	6a469f6f4de50       f40be0088a83e       12 minutes ago      Running             kube-apiserver            0                   79ca704e9271f
	c154a93ac7de2       99a3486be4f28       12 minutes ago      Running             kube-scheduler            0                   259e2071a573d
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-03-25 02:10:08 UTC, end at Fri 2022-03-25 02:22:58 UTC. --
	Mar 25 02:16:14 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:16:14.042551242Z" level=info msg="RemoveContainer for \"d208a110372dd3afe93f06ac2658cfd92f99ac83bbb21db8d077402fd5871907\" returns successfully"
	Mar 25 02:16:26 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:16:26.413691505Z" level=info msg="CreateContainer within sandbox \"f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Mar 25 02:16:26 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:16:26.428015830Z" level=info msg="CreateContainer within sandbox \"f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"030de033d0939ded8e5344d90e7d56927ace37474be7e3f274dda51d9fa71a50\""
	Mar 25 02:16:26 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:16:26.428632011Z" level=info msg="StartContainer for \"030de033d0939ded8e5344d90e7d56927ace37474be7e3f274dda51d9fa71a50\""
	Mar 25 02:16:26 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:16:26.600107317Z" level=info msg="StartContainer for \"030de033d0939ded8e5344d90e7d56927ace37474be7e3f274dda51d9fa71a50\" returns successfully"
	Mar 25 02:19:06 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:06.829730221Z" level=info msg="shim disconnected" id=030de033d0939ded8e5344d90e7d56927ace37474be7e3f274dda51d9fa71a50
	Mar 25 02:19:06 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:06.829803025Z" level=warning msg="cleaning up after shim disconnected" id=030de033d0939ded8e5344d90e7d56927ace37474be7e3f274dda51d9fa71a50 namespace=k8s.io
	Mar 25 02:19:06 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:06.829819468Z" level=info msg="cleaning up dead shim"
	Mar 25 02:19:06 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:06.840242030Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:19:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2545\n"
	Mar 25 02:19:07 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:07.333912812Z" level=info msg="RemoveContainer for \"22b6127ab7f71c86e4615a4dc3e722fd358e082ef1371efb6d3f116104e10ef6\""
	Mar 25 02:19:07 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:07.338288741Z" level=info msg="RemoveContainer for \"22b6127ab7f71c86e4615a4dc3e722fd358e082ef1371efb6d3f116104e10ef6\" returns successfully"
	Mar 25 02:19:34 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:34.413990532Z" level=info msg="CreateContainer within sandbox \"f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Mar 25 02:19:34 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:34.427091737Z" level=info msg="CreateContainer within sandbox \"f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739\""
	Mar 25 02:19:34 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:34.427719553Z" level=info msg="StartContainer for \"246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739\""
	Mar 25 02:19:34 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:19:34.594883412Z" level=info msg="StartContainer for \"246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739\" returns successfully"
	Mar 25 02:22:14 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:14.829577310Z" level=info msg="shim disconnected" id=246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739
	Mar 25 02:22:14 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:14.829643054Z" level=warning msg="cleaning up after shim disconnected" id=246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739 namespace=k8s.io
	Mar 25 02:22:14 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:14.829657322Z" level=info msg="cleaning up dead shim"
	Mar 25 02:22:14 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:14.839797570Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:22:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2647\n"
	Mar 25 02:22:15 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:15.654850281Z" level=info msg="RemoveContainer for \"030de033d0939ded8e5344d90e7d56927ace37474be7e3f274dda51d9fa71a50\""
	Mar 25 02:22:15 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:15.659701357Z" level=info msg="RemoveContainer for \"030de033d0939ded8e5344d90e7d56927ace37474be7e3f274dda51d9fa71a50\" returns successfully"
	Mar 25 02:22:55 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:55.414143541Z" level=info msg="CreateContainer within sandbox \"f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Mar 25 02:22:55 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:55.427009616Z" level=info msg="CreateContainer within sandbox \"f2bb5089f886804080445f284942bb6f294966c3a1448eea2824474138018dc1\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db\""
	Mar 25 02:22:55 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:55.427567427Z" level=info msg="StartContainer for \"f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db\""
	Mar 25 02:22:55 default-k8s-different-port-20220325020956-262786 containerd[470]: time="2022-03-25T02:22:55.588285208Z" level=info msg="StartContainer for \"f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220325020956-262786
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220325020956-262786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95
	                    minikube.k8s.io/name=default-k8s-different-port-20220325020956-262786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_25T02_10_39_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Mar 2022 02:10:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220325020956-262786
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 25 Mar 2022 02:22:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Mar 2022 02:21:04 +0000   Fri, 25 Mar 2022 02:10:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Mar 2022 02:21:04 +0000   Fri, 25 Mar 2022 02:10:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Mar 2022 02:21:04 +0000   Fri, 25 Mar 2022 02:10:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 25 Mar 2022 02:21:04 +0000   Fri, 25 Mar 2022 02:10:32 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20220325020956-262786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                3d34c106-4e48-46f4-9bcf-ea4602321294
	  Boot ID:                    63fce5d9-a30b-498a-bfed-7dd46d23a363
	  Kernel Version:             5.13.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.3
	  Kube-Proxy Version:         v1.23.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220325020956-262786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-kt955                                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220325020956-262786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220325020956-262786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7cpjt                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220325020956-262786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 12m                kube-proxy  
	  Normal  NodeHasSufficientMemory  12m (x4 over 12m)  kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x4 over 12m)  kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x4 over 12m)  kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
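
The Ready=False condition above (reason KubeletNotReady, "cni plugin not initialized") together with the node.kubernetes.io/not-ready:NoSchedule taint is exactly what the scheduler cited when it left coredns Unschedulable at the top of this log, and the kindnet-cni restarts in the container status section explain why the CNI never initialized. A small client-go sketch that surfaces the same two facts (illustrative only; node name and kubeconfig path are assumptions taken from the output above):

    // Print the node's Ready condition and taints, mirroring the
    // "describe nodes" output above. Not minikube code.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        node, err := cs.CoreV1().Nodes().Get(context.TODO(),
            "default-k8s-different-port-20220325020956-262786", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("Ready=%s reason=%s message=%s\n", c.Status, c.Reason, c.Message)
            }
        }
        for _, t := range node.Spec.Taints {
            fmt.Printf("taint %s=%s:%s\n", t.Key, t.Value, t.Effect)
        }
    }
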
	
	* 
	* ==> dmesg <==
	* [  +1.011896] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.023877] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +0.953086] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethf4b51852
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 70 36 f8 5f b0 08 06
	[  +0.031950] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth6535462d
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a c6 0e 0e 23 49 08 06
	[  +0.644934] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth6535462d
	[  +0.401878] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth5b52bbbf
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 37 c2 ed 50 67 08 06
	[  +0.935995] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.035860] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.019942] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[Mar25 02:14] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth3401b1e0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 76 e9 ba cf fb f8 08 06
	[  +0.179199] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vetha9eb2fdf
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 32 f8 c1 5c 31 f0 08 06
	[  +0.564272] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethc1de7e82
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e d1 71 83 67 99 08 06
	[  +0.295714] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth71c4bd69
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe 35 ee 14 12 82 08 06
	
	* 
	* ==> etcd [bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7] <==
	* {"level":"info","ts":"2022-03-25T02:10:33.405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-03-25T02:10:33.405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-03-25T02:10:33.405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:default-k8s-different-port-20220325020956-262786 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-25T02:10:33.406Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-25T02:10:33.407Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:10:33.407Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:10:33.407Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:10:33.408Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-03-25T02:10:33.408Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-03-25T02:15:02.377Z","caller":"traceutil/trace.go:171","msg":"trace[2068280475] linearizableReadLoop","detail":"{readStateIndex:604; appliedIndex:604; }","duration":"150.408748ms","start":"2022-03-25T02:15:02.227Z","end":"2022-03-25T02:15:02.377Z","steps":["trace[2068280475] 'read index received'  (duration: 150.399902ms)","trace[2068280475] 'applied index is now lower than readState.Index'  (duration: 7.441µs)"],"step_count":2}
	{"level":"warn","ts":"2022-03-25T02:15:02.485Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"173.383294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-25T02:15:02.485Z","caller":"traceutil/trace.go:171","msg":"trace[336424621] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:537; }","duration":"173.538195ms","start":"2022-03-25T02:15:02.312Z","end":"2022-03-25T02:15:02.485Z","steps":["trace[336424621] 'agreement among raft nodes before linearized reading'  (duration: 65.686248ms)","trace[336424621] 'range keys from in-memory index tree'  (duration: 107.671066ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-25T02:15:02.485Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"241.494023ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-25T02:15:02.485Z","caller":"traceutil/trace.go:171","msg":"trace[1794756522] range","detail":"{range_begin:/registry/statefulsets/; range_end:/registry/statefulsets0; response_count:0; response_revision:537; }","duration":"241.70445ms","start":"2022-03-25T02:15:02.243Z","end":"2022-03-25T02:15:02.485Z","steps":["trace[1794756522] 'agreement among raft nodes before linearized reading'  (duration: 133.735426ms)","trace[1794756522] 'count revisions from in-memory index tree'  (duration: 107.74338ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-25T02:15:02.485Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"258.336191ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-03-25T02:15:02.485Z","caller":"traceutil/trace.go:171","msg":"trace[1332604840] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:537; }","duration":"258.680861ms","start":"2022-03-25T02:15:02.227Z","end":"2022-03-25T02:15:02.485Z","steps":["trace[1332604840] 'agreement among raft nodes before linearized reading'  (duration: 150.58741ms)","trace[1332604840] 'count revisions from in-memory index tree'  (duration: 107.724613ms)"],"step_count":2}
	{"level":"info","ts":"2022-03-25T02:20:33.420Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":546}
	{"level":"info","ts":"2022-03-25T02:20:33.421Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":546,"took":"560.549µs"}
	
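The etcd log above looks healthy apart from the "apply request took too long" warnings, where read-only range requests took roughly 170-260ms against the expected 100ms; on a shared CI host that usually points at disk or CPU contention rather than a cluster fault. One way this could be confirmed, sketched under the assumptions that the etcd pod follows the usual kubeadm naming and that minikube mounts its etcd certs at /var/lib/minikube/certs/etcd:

    kubectl -n kube-system exec etcd-default-k8s-different-port-20220325020956-262786 -- \
      etcdctl --endpoints=https://127.0.0.1:2379 \
        --cacert=/var/lib/minikube/certs/etcd/ca.crt \
        --cert=/var/lib/minikube/certs/etcd/server.crt \
        --key=/var/lib/minikube/certs/etcd/server.key \
        endpoint status -w table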
	* 
	* ==> kernel <==
	*  02:22:58 up  5:00,  0 users,  load average: 0.45, 0.62, 1.06
	Linux default-k8s-different-port-20220325020956-262786 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182] <==
	* I0325 02:10:35.407779       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0325 02:10:35.407821       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0325 02:10:35.407883       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0325 02:10:35.410667       1 cache.go:39] Caches are synced for autoregister controller
	I0325 02:10:35.418114       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0325 02:10:35.426256       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0325 02:10:36.307036       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0325 02:10:36.307060       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0325 02:10:36.312536       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0325 02:10:36.315494       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0325 02:10:36.315514       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0325 02:10:36.735448       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0325 02:10:36.766331       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0325 02:10:36.903400       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0325 02:10:36.909041       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0325 02:10:36.910002       1 controller.go:611] quota admission added evaluator for: endpoints
	I0325 02:10:36.913660       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0325 02:10:37.498548       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0325 02:10:38.290755       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0325 02:10:38.299864       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0325 02:10:38.310209       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0325 02:10:43.395414       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0325 02:10:50.755106       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0325 02:10:51.255928       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0325 02:10:51.929935       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73] <==
	* I0325 02:10:50.352002       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0325 02:10:50.399012       1 shared_informer.go:247] Caches are synced for expand 
	I0325 02:10:50.400127       1 shared_informer.go:247] Caches are synced for attach detach 
	I0325 02:10:50.402593       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0325 02:10:50.413393       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0325 02:10:50.440429       1 shared_informer.go:247] Caches are synced for stateful set 
	I0325 02:10:50.451558       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0325 02:10:50.457285       1 shared_informer.go:247] Caches are synced for namespace 
	I0325 02:10:50.498016       1 shared_informer.go:247] Caches are synced for service account 
	I0325 02:10:50.508551       1 shared_informer.go:247] Caches are synced for resource quota 
	I0325 02:10:50.521736       1 shared_informer.go:247] Caches are synced for disruption 
	I0325 02:10:50.521785       1 disruption.go:371] Sending events to api server.
	I0325 02:10:50.534205       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0325 02:10:50.552612       1 shared_informer.go:247] Caches are synced for deployment 
	I0325 02:10:50.555880       1 shared_informer.go:247] Caches are synced for resource quota 
	I0325 02:10:50.761437       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7cpjt"
	I0325 02:10:50.763623       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-kt955"
	I0325 02:10:50.972353       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0325 02:10:51.015363       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0325 02:10:51.015391       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0325 02:10:51.258069       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0325 02:10:51.357575       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-dpp64"
	I0325 02:10:51.362162       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-9tgbz"
	I0325 02:10:51.549492       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0325 02:10:51.558391       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-dpp64"
	
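The trailing ScalingReplicaSet/SuccessfulDelete pair is expected behavior: the CoreDNS Deployment starts with the upstream default of two replicas and minikube immediately scales it down to one, so the deletion of coredns-64897985d-dpp64 is not itself a failure. The surviving replica can be checked with, for example:

    kubectl --context default-k8s-different-port-20220325020956-262786 -n kube-system get deploy,rs,pods -l k8s-app=kube-dns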
	* 
	* ==> kube-proxy [dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b] <==
	* I0325 02:10:51.903633       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0325 02:10:51.903717       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0325 02:10:51.903776       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0325 02:10:51.926345       1 server_others.go:206] "Using iptables Proxier"
	I0325 02:10:51.926371       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0325 02:10:51.926379       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0325 02:10:51.926398       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0325 02:10:51.926824       1 server.go:656] "Version info" version="v1.23.3"
	I0325 02:10:51.927429       1 config.go:226] "Starting endpoint slice config controller"
	I0325 02:10:51.927435       1 config.go:317] "Starting service config controller"
	I0325 02:10:51.927463       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0325 02:10:51.927465       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0325 02:10:52.028308       1 shared_informer.go:247] Caches are synced for service config 
	I0325 02:10:52.028348       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd] <==
	* W0325 02:10:35.393403       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0325 02:10:35.393411       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0325 02:10:35.393426       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 02:10:35.393427       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0325 02:10:35.393780       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 02:10:35.393803       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0325 02:10:35.393937       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0325 02:10:35.393971       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0325 02:10:35.394024       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0325 02:10:35.394054       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0325 02:10:35.394671       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0325 02:10:35.394703       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 02:10:35.394724       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 02:10:35.394701       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0325 02:10:35.394676       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0325 02:10:35.394772       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0325 02:10:36.234022       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 02:10:36.234107       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0325 02:10:36.361824       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 02:10:36.361852       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0325 02:10:36.372015       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:10:36.372056       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0325 02:10:36.495928       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0325 02:10:36.495976       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0325 02:10:38.389536       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
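The burst of "forbidden" warnings above is the usual startup ordering race: the scheduler's informers begin listing resources before the apiserver has finished creating the system:kube-scheduler RBAC bindings, and the final "Caches are synced" line at 02:10:38 shows the race resolved within seconds. If in doubt, the binding's presence can be verified with something like:

    kubectl --context default-k8s-different-port-20220325020956-262786 get clusterrolebinding system:kube-scheduler -o wide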
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-03-25 02:10:08 UTC, end at Fri 2022-03-25 02:22:58 UTC. --
	Mar 25 02:21:33 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:21:33.775065    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:21:38 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:21:38.776646    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:21:43 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:21:43.778077    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:21:48 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:21:48.778777    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:21:53 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:21:53.779629    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:21:58 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:21:58.780561    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:03 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:03.782282    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:08 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:08.783038    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:13 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:13.784390    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:15 default-k8s-different-port-20220325020956-262786 kubelet[1287]: I0325 02:22:15.653678    1287 scope.go:110] "RemoveContainer" containerID="030de033d0939ded8e5344d90e7d56927ace37474be7e3f274dda51d9fa71a50"
	Mar 25 02:22:15 default-k8s-different-port-20220325020956-262786 kubelet[1287]: I0325 02:22:15.654021    1287 scope.go:110] "RemoveContainer" containerID="246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	Mar 25 02:22:15 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:15.654340    1287 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-kt955_kube-system(87a42b24-60b7-415b-abc9-e574262093c0)\"" pod="kube-system/kindnet-kt955" podUID=87a42b24-60b7-415b-abc9-e574262093c0
	Mar 25 02:22:18 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:18.785813    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:23 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:23.787083    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:28 default-k8s-different-port-20220325020956-262786 kubelet[1287]: I0325 02:22:28.411755    1287 scope.go:110] "RemoveContainer" containerID="246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	Mar 25 02:22:28 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:28.412103    1287 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-kt955_kube-system(87a42b24-60b7-415b-abc9-e574262093c0)\"" pod="kube-system/kindnet-kt955" podUID=87a42b24-60b7-415b-abc9-e574262093c0
	Mar 25 02:22:28 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:28.788025    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:33 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:33.789332    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:38 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:38.790632    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:40 default-k8s-different-port-20220325020956-262786 kubelet[1287]: I0325 02:22:40.411738    1287 scope.go:110] "RemoveContainer" containerID="246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	Mar 25 02:22:40 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:40.412029    1287 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-kt955_kube-system(87a42b24-60b7-415b-abc9-e574262093c0)\"" pod="kube-system/kindnet-kt955" podUID=87a42b24-60b7-415b-abc9-e574262093c0
	Mar 25 02:22:43 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:43.791933    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:48 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:48.793335    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:53 default-k8s-different-port-20220325020956-262786 kubelet[1287]: E0325 02:22:53.794429    1287 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:22:55 default-k8s-different-port-20220325020956-262786 kubelet[1287]: I0325 02:22:55.411684    1287 scope.go:110] "RemoveContainer" containerID="246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	
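This kubelet section carries the actual failure: kindnet-cni is stuck in CrashLoopBackOff, so no CNI config is ever written to the kubelet's cni-conf-dir (/etc/cni/net.mk here) and the node keeps reporting "Container runtime network not ready". A minimal triage sketch, using the pod name taken from this log:

    kubectl --context default-k8s-different-port-20220325020956-262786 -n kube-system logs -p kindnet-kt955 -c kindnet-cni
    kubectl --context default-k8s-different-port-20220325020956-262786 -n kube-system describe pod kindnet-kt955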

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220325020956-262786 -n default-k8s-different-port-20220325020956-262786
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20220325020956-262786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: busybox coredns-64897985d-9tgbz storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220325020956-262786 describe pod busybox coredns-64897985d-9tgbz storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220325020956-262786 describe pod busybox coredns-64897985d-9tgbz storage-provisioner: exit status 1 (62.995663ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dwnt4 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-dwnt4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  51s (x8 over 8m5s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-9tgbz" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20220325020956-262786 describe pod busybox coredns-64897985d-9tgbz storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (484.68s)
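Two details in this post-mortem are worth separating: busybox stays Pending because the single node still carries the node.kubernetes.io/not-ready taint (a direct consequence of the CNI failure in the logs above), while the two NotFound errors only mean that coredns-64897985d-9tgbz and storage-provisioner disappeared between the field-selector listing and the describe call. The taint itself can be confirmed with, say:

    kubectl --context default-k8s-different-port-20220325020956-262786 get nodes -o jsonpath='{.items[0].spec.taints}'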

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (544.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220325020326-262786 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.4-rc.0
E0325 02:16:41.516573  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-20220325020326-262786 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.4-rc.0: exit status 80 (9m2.867410938s)

                                                
                                                
-- stdout --
	* [no-preload-20220325020326-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node no-preload-20220325020326-262786 in cluster no-preload-20220325020326-262786
	* Pulling base image ...
	* Restarting existing docker container for "no-preload-20220325020326-262786" ...
	* Preparing Kubernetes v1.23.4-rc.0 on containerd 1.5.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	  - Using image kubernetesui/dashboard:v2.3.1
	  - Using image k8s.gcr.io/echoserver:1.4
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0325 02:16:35.482311  519649 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:16:35.482451  519649 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:16:35.482462  519649 out.go:310] Setting ErrFile to fd 2...
	I0325 02:16:35.482467  519649 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:16:35.482575  519649 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:16:35.482813  519649 out.go:304] Setting JSON to false
	I0325 02:16:35.484309  519649 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17668,"bootTime":1648156928,"procs":518,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:16:35.484382  519649 start.go:125] virtualization: kvm guest
	I0325 02:16:35.487068  519649 out.go:176] * [no-preload-20220325020326-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:16:35.488730  519649 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:16:35.487298  519649 notify.go:193] Checking for updates...
	I0325 02:16:35.490311  519649 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:16:35.491877  519649 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:16:35.493486  519649 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:16:35.495057  519649 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:16:35.496266  519649 config.go:176] Loaded profile config "no-preload-20220325020326-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:16:35.497491  519649 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:16:35.540694  519649 docker.go:136] docker version: linux-20.10.14
	I0325 02:16:35.540841  519649 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:16:35.641548  519649 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:16:35.575580325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:16:35.641678  519649 docker.go:253] overlay module found
	I0325 02:16:35.644240  519649 out.go:176] * Using the docker driver based on existing profile
	I0325 02:16:35.644293  519649 start.go:284] selected driver: docker
	I0325 02:16:35.644302  519649 start.go:801] validating driver "docker" against &{Name:no-preload-20220325020326-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:16:35.644458  519649 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:16:35.644501  519649 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:16:35.644530  519649 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0325 02:16:35.646030  519649 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:16:35.646742  519649 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:16:35.752278  519649 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 02:16:35.682730162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0325 02:16:35.752465  519649 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:16:35.752492  519649 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0325 02:16:35.754658  519649 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:16:35.754778  519649 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 02:16:35.754810  519649 cni.go:93] Creating CNI manager for ""
	I0325 02:16:35.754821  519649 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:16:35.754840  519649 start_flags.go:304] config:
	{Name:no-preload-20220325020326-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:16:35.756791  519649 out.go:176] * Starting control plane node no-preload-20220325020326-262786 in cluster no-preload-20220325020326-262786
	I0325 02:16:35.756829  519649 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:16:35.758358  519649 out.go:176] * Pulling base image ...
	I0325 02:16:35.758390  519649 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime containerd
	I0325 02:16:35.758492  519649 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:16:35.758563  519649 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/config.json ...
	I0325 02:16:35.758688  519649 cache.go:107] acquiring lock: {Name:mkadc5033eb4d9179acd1c6e7ff0e25d4981568c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758710  519649 cache.go:107] acquiring lock: {Name:mk0987b0339865c5416a6746bce8670ad78c0a9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758707  519649 cache.go:107] acquiring lock: {Name:mkdc6a82c5ad28a9b97463884b87944eaef2fef7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758830  519649 cache.go:107] acquiring lock: {Name:mk140b8e2c06d387b642b813a7efd82a9f19d6c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758829  519649 cache.go:107] acquiring lock: {Name:mk8ed79f1ecf0bc83b0d3ead06534032f65db356 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758880  519649 cache.go:107] acquiring lock: {Name:mkcb4c0577b6fb6a4cc15cd1cfc04742789dcc24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758920  519649 cache.go:107] acquiring lock: {Name:mk1134717661547774a1dd6d6e2854162646543d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.758911  519649 cache.go:107] acquiring lock: {Name:mk61dd10aefdeb5283d07e3024688797852e36d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.759022  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
	I0325 02:16:35.759030  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0 exists
	I0325 02:16:35.759047  519649 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 372.469µs
	I0325 02:16:35.759047  519649 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0" took 131.834µs
	I0325 02:16:35.759061  519649 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
	I0325 02:16:35.759064  519649 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.4-rc.0 succeeded
	I0325 02:16:35.758904  519649 cache.go:107] acquiring lock: {Name:mkcf6d57389d13d4e31240b1cdf9af5455cf82f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.759073  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0 exists
	I0325 02:16:35.759078  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0325 02:16:35.759099  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 exists
	I0325 02:16:35.759090  519649 cache.go:107] acquiring lock: {Name:mkd382d09a068cdb98cdc085f7d3d174faef8f1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.759109  519649 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1" took 210.056µs
	I0325 02:16:35.759116  519649 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
	I0325 02:16:35.759104  519649 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 350.331µs
	I0325 02:16:35.759086  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists
	I0325 02:16:35.759124  519649 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0325 02:16:35.759102  519649 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0" took 354.111µs
	I0325 02:16:35.759149  519649 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.4-rc.0 succeeded
	I0325 02:16:35.759143  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0 exists
	I0325 02:16:35.759144  519649 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 439.796µs
	I0325 02:16:35.759168  519649 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded
	I0325 02:16:35.759127  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists
	I0325 02:16:35.759167  519649 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0" took 339.705µs
	I0325 02:16:35.759178  519649 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.4-rc.0 succeeded
	I0325 02:16:35.759105  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0325 02:16:35.759188  519649 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 362.557µs
	I0325 02:16:35.759203  519649 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded
	I0325 02:16:35.759199  519649 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 504.454µs
	I0325 02:16:35.759217  519649 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0325 02:16:35.759228  519649 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0 exists
	I0325 02:16:35.759276  519649 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.4-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0" took 279.744µs
	I0325 02:16:35.759305  519649 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.4-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.4-rc.0 succeeded
	I0325 02:16:35.759331  519649 cache.go:87] Successfully saved all images to host disk.
	I0325 02:16:35.794208  519649 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:16:35.794250  519649 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:16:35.794266  519649 cache.go:208] Successfully downloaded all kic artifacts
	I0325 02:16:35.794300  519649 start.go:348] acquiring machines lock for no-preload-20220325020326-262786: {Name:mk0b68e00c1687cd51ada59f78a2181cd58687dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:16:35.794388  519649 start.go:352] acquired machines lock for "no-preload-20220325020326-262786" in 69.622µs
	I0325 02:16:35.794408  519649 start.go:94] Skipping create...Using existing machine configuration
	I0325 02:16:35.794412  519649 fix.go:55] fixHost starting: 
	I0325 02:16:35.794639  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:16:35.829675  519649 fix.go:108] recreateIfNeeded on no-preload-20220325020326-262786: state=Stopped err=<nil>
	W0325 02:16:35.829710  519649 fix.go:134] unexpected machine state, will restart: <nil>
	I0325 02:16:35.833187  519649 out.go:176] * Restarting existing docker container for "no-preload-20220325020326-262786" ...
	I0325 02:16:35.833270  519649 cli_runner.go:133] Run: docker start no-preload-20220325020326-262786
	I0325 02:16:36.223867  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:16:36.260748  519649 kic.go:420] container "no-preload-20220325020326-262786" state is running.
	I0325 02:16:36.261158  519649 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220325020326-262786
	I0325 02:16:36.295907  519649 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/config.json ...
	I0325 02:16:36.296110  519649 machine.go:88] provisioning docker machine ...
	I0325 02:16:36.296134  519649 ubuntu.go:169] provisioning hostname "no-preload-20220325020326-262786"
	I0325 02:16:36.296174  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:36.331323  519649 main.go:130] libmachine: Using SSH client type: native
	I0325 02:16:36.331546  519649 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49589 <nil> <nil>}
	I0325 02:16:36.331564  519649 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20220325020326-262786 && echo "no-preload-20220325020326-262786" | sudo tee /etc/hostname
	I0325 02:16:36.332175  519649 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50526->127.0.0.1:49589: read: connection reset by peer
	I0325 02:16:39.464533  519649 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20220325020326-262786
	
	I0325 02:16:39.464619  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:39.500131  519649 main.go:130] libmachine: Using SSH client type: native
	I0325 02:16:39.500311  519649 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49589 <nil> <nil>}
	I0325 02:16:39.500341  519649 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220325020326-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220325020326-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220325020326-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 02:16:39.619029  519649 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 02:16:39.619064  519649 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:16:39.619085  519649 ubuntu.go:177] setting up certificates
	I0325 02:16:39.619100  519649 provision.go:83] configureAuth start
	I0325 02:16:39.619161  519649 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220325020326-262786
	I0325 02:16:39.653347  519649 provision.go:138] copyHostCerts
	I0325 02:16:39.653407  519649 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:16:39.653421  519649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:16:39.653484  519649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:16:39.653581  519649 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:16:39.653592  519649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:16:39.653616  519649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:16:39.653673  519649 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:16:39.653687  519649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:16:39.653707  519649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:16:39.653765  519649 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220325020326-262786 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220325020326-262786]
	I0325 02:16:39.955829  519649 provision.go:172] copyRemoteCerts
	I0325 02:16:39.955898  519649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:16:39.955933  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:39.989898  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.079856  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0325 02:16:40.099567  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:16:40.119824  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0325 02:16:40.140874  519649 provision.go:86] duration metric: configureAuth took 521.759605ms
	I0325 02:16:40.140906  519649 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:16:40.141163  519649 config.go:176] Loaded profile config "no-preload-20220325020326-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:16:40.141185  519649 machine.go:91] provisioned docker machine in 3.845060196s
	I0325 02:16:40.141193  519649 start.go:302] post-start starting for "no-preload-20220325020326-262786" (driver="docker")
	I0325 02:16:40.141201  519649 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:16:40.141260  519649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:16:40.141308  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:40.180699  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.271442  519649 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:16:40.274944  519649 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:16:40.275028  519649 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:16:40.275041  519649 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:16:40.275051  519649 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:16:40.275064  519649 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:16:40.275115  519649 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:16:40.275176  519649 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:16:40.275263  519649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:16:40.282729  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:16:40.301545  519649 start.go:305] post-start completed in 160.334219ms
	I0325 02:16:40.301629  519649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:16:40.301692  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:40.340243  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.427579  519649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:16:40.431311  519649 fix.go:57] fixHost completed within 4.636891748s
	I0325 02:16:40.431332  519649 start.go:81] releasing machines lock for "no-preload-20220325020326-262786", held for 4.636932836s
	I0325 02:16:40.431419  519649 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220325020326-262786
	I0325 02:16:40.471929  519649 ssh_runner.go:195] Run: systemctl --version
	I0325 02:16:40.471972  519649 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:16:40.471994  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:40.472031  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:16:40.514344  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.516013  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:16:40.624849  519649 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:16:40.637160  519649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:16:40.647198  519649 docker.go:183] disabling docker service ...
	I0325 02:16:40.647293  519649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:16:40.657506  519649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:16:40.667205  519649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:16:40.756526  519649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:16:40.838425  519649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 02:16:40.849201  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:16:40.862764  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0325 02:16:40.877296  519649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:16:40.884604  519649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:16:40.891942  519649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:16:40.968097  519649 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:16:41.042195  519649 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:16:41.042340  519649 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:16:41.046206  519649 start.go:462] Will wait 60s for crictl version
	I0325 02:16:41.046277  519649 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:16:41.069914  519649 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:16:41Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0325 02:16:52.117787  519649 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:16:52.144102  519649 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 02:16:52.144170  519649 ssh_runner.go:195] Run: containerd --version
	I0325 02:16:52.168021  519649 ssh_runner.go:195] Run: containerd --version
	I0325 02:16:52.192255  519649 out.go:176] * Preparing Kubernetes v1.23.4-rc.0 on containerd 1.5.10 ...
	I0325 02:16:52.192348  519649 cli_runner.go:133] Run: docker network inspect no-preload-20220325020326-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:16:52.228171  519649 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0325 02:16:52.231817  519649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:16:52.244329  519649 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 02:16:52.244416  519649 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime containerd
	I0325 02:16:52.244468  519649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:16:52.271321  519649 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:16:52.271344  519649 cache_images.go:84] Images are preloaded, skipping loading
	I0325 02:16:52.271385  519649 ssh_runner.go:195] Run: sudo crictl info
	I0325 02:16:52.298329  519649 cni.go:93] Creating CNI manager for ""
	I0325 02:16:52.298360  519649 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:16:52.298373  519649 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 02:16:52.298389  519649 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.4-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220325020326-262786 NodeName:no-preload-20220325020326-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 02:16:52.298577  519649 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220325020326-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.4-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0325 02:16:52.298682  519649 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.4-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220325020326-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0325 02:16:52.298747  519649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4-rc.0
	I0325 02:16:52.306846  519649 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 02:16:52.306918  519649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 02:16:52.315084  519649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (582 bytes)
	I0325 02:16:52.328704  519649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0325 02:16:52.342299  519649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2066 bytes)
	I0325 02:16:52.355577  519649 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0325 02:16:52.358463  519649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:16:52.367826  519649 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786 for IP: 192.168.67.2
	I0325 02:16:52.367934  519649 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 02:16:52.367989  519649 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 02:16:52.368051  519649 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/client.key
	I0325 02:16:52.368101  519649 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.key.c7fa3a9e
	I0325 02:16:52.368132  519649 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.key
	I0325 02:16:52.368232  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 02:16:52.368263  519649 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 02:16:52.368275  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 02:16:52.368299  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 02:16:52.368335  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 02:16:52.368357  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 02:16:52.368397  519649 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:16:52.368977  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 02:16:52.386350  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0325 02:16:52.404078  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 02:16:52.422535  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0325 02:16:52.441293  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 02:16:52.458689  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 02:16:52.476708  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 02:16:52.494410  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 02:16:52.511769  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 02:16:52.529287  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 02:16:52.546092  519649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 02:16:52.562842  519649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 02:16:52.574641  519649 ssh_runner.go:195] Run: openssl version
	I0325 02:16:52.579369  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 02:16:52.586915  519649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 02:16:52.590088  519649 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 02:16:52.590144  519649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 02:16:52.595082  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
	I0325 02:16:52.601804  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 02:16:52.608863  519649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 02:16:52.611860  519649 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 02:16:52.611906  519649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 02:16:52.616573  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
	I0325 02:16:52.622899  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 02:16:52.629919  519649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:16:52.632815  519649 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:16:52.632859  519649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:16:52.637417  519649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0325 02:16:52.644239  519649 kubeadm.go:391] StartCluster: {Name:no-preload-20220325020326-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:no-preload-20220325020326-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:16:52.644354  519649 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 02:16:52.644394  519649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:16:52.669210  519649 cri.go:87] found id: "e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741"
	I0325 02:16:52.669242  519649 cri.go:87] found id: "0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc"
	I0325 02:16:52.669249  519649 cri.go:87] found id: "ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252"
	I0325 02:16:52.669254  519649 cri.go:87] found id: "fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b"
	I0325 02:16:52.669270  519649 cri.go:87] found id: "e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710"
	I0325 02:16:52.669279  519649 cri.go:87] found id: "b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244"
	I0325 02:16:52.669283  519649 cri.go:87] found id: ""
	I0325 02:16:52.669324  519649 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0325 02:16:52.683722  519649 cri.go:114] JSON = null
	W0325 02:16:52.683785  519649 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0325 02:16:52.683838  519649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 02:16:52.690850  519649 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0325 02:16:52.690872  519649 kubeadm.go:601] restartCluster start
	I0325 02:16:52.690912  519649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0325 02:16:52.697516  519649 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:52.698228  519649 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220325020326-262786" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:16:52.698600  519649 kubeconfig.go:127] "no-preload-20220325020326-262786" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0325 02:16:52.699273  519649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:16:52.700696  519649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0325 02:16:52.707667  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:52.707717  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:52.715666  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:52.916102  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:52.916184  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:52.925481  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.116769  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.116855  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.125381  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.316671  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.316772  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.325189  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.516483  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.516581  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.525793  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.716104  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.716183  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.724648  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:53.915849  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:53.915940  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:53.924616  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.115776  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.115861  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.124538  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.316714  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.316801  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.325601  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.515836  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.515913  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.524158  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.716463  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.716549  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.725607  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:54.915823  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:54.915903  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:54.924487  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.116802  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.116901  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.126160  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.316446  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.316526  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.324891  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.516554  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.516656  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.525265  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.716429  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.716509  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.725617  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.725645  519649 api_server.go:165] Checking apiserver status ...
	I0325 02:16:55.725683  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:16:55.733139  519649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.733164  519649 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0325 02:16:55.733174  519649 kubeadm.go:1067] stopping kube-system containers ...
	I0325 02:16:55.733193  519649 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0325 02:16:55.733247  519649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:16:55.758794  519649 cri.go:87] found id: "e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741"
	I0325 02:16:55.758826  519649 cri.go:87] found id: "0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc"
	I0325 02:16:55.758835  519649 cri.go:87] found id: "ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252"
	I0325 02:16:55.758843  519649 cri.go:87] found id: "fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b"
	I0325 02:16:55.758852  519649 cri.go:87] found id: "e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710"
	I0325 02:16:55.758860  519649 cri.go:87] found id: "b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244"
	I0325 02:16:55.758867  519649 cri.go:87] found id: ""
	I0325 02:16:55.758874  519649 cri.go:232] Stopping containers: [e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741 0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252 fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710 b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244]
	I0325 02:16:55.758928  519649 ssh_runner.go:195] Run: which crictl
	I0325 02:16:55.762024  519649 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop e938c238f422e7ad2c1f852f7f7c99cc44f076a452650a7f715edf95e2118741 0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc ca6eb75c498fb5c2b059fbf58d7bed65bcd0726d43ee6e9807919af7e6556252 fad18b6ff5e71e43bc6a547fdb395ce6b994e5a50e89314a8fa86e8be772aa3b e6d0357cdf9c298347920771d4f76826f2d16c3d0962a86217262e44f649d710 b96c3eba0f9adf49a6ea2b6617d2354e974495a9aa18e33562840ff338b2e244
	I0325 02:16:55.786603  519649 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0325 02:16:55.796385  519649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:16:55.803085  519649 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar 25 02:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Mar 25 02:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Mar 25 02:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Mar 25 02:03 /etc/kubernetes/scheduler.conf
	
	I0325 02:16:55.803151  519649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0325 02:16:55.809939  519649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0325 02:16:55.816507  519649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0325 02:16:55.822744  519649 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.822807  519649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0325 02:16:55.828985  519649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0325 02:16:55.835918  519649 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:16:55.835967  519649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0325 02:16:55.843105  519649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:16:55.850384  519649 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0325 02:16:55.850419  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:55.893825  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:56.667540  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:56.802771  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:56.854899  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:16:56.922247  519649 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:16:56.922327  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:57.431777  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:57.932218  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:58.431927  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:58.931629  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:59.432174  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:16:59.932237  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:00.431697  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:00.932213  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:01.431617  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:01.931744  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:02.431861  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:02.931562  519649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:17:02.996665  519649 api_server.go:71] duration metric: took 6.074430006s to wait for apiserver process to appear ...
	I0325 02:17:02.996706  519649 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:17:02.996721  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:02.997178  519649 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0325 02:17:03.497954  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:06.096426  519649 api_server.go:266] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0325 02:17:06.096466  519649 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0325 02:17:06.497872  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:06.502718  519649 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:17:06.502746  519649 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:17:06.998348  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:07.002908  519649 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:17:07.002934  519649 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:17:07.497481  519649 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0325 02:17:07.502551  519649 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0325 02:17:07.508747  519649 api_server.go:140] control plane version: v1.23.4-rc.0
	I0325 02:17:07.508776  519649 api_server.go:130] duration metric: took 4.512062997s to wait for apiserver health ...
	I0325 02:17:07.508793  519649 cni.go:93] Creating CNI manager for ""
	I0325 02:17:07.508800  519649 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:17:07.511699  519649 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:17:07.511795  519649 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:17:07.515865  519649 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl ...
	I0325 02:17:07.515896  519649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:17:07.530511  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:17:08.432775  519649 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:17:08.439909  519649 system_pods.go:59] 9 kube-system pods found
	I0325 02:17:08.439946  519649 system_pods.go:61] "coredns-64897985d-b9827" [29b80e2f-89fe-4b4a-a931-333a59535d4c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:17:08.439962  519649 system_pods.go:61] "etcd-no-preload-20220325020326-262786" [add71311-f324-4612-b981-ca42b0ef813c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0325 02:17:08.439971  519649 system_pods.go:61] "kindnet-nhlsm" [57939cf7-016c-486a-8a08-466ff1515c1f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:17:08.439977  519649 system_pods.go:61] "kube-apiserver-no-preload-20220325020326-262786" [f9b1f749-8d63-446e-bd36-152e849a5bf6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0325 02:17:08.439990  519649 system_pods.go:61] "kube-controller-manager-no-preload-20220325020326-262786" [a229a2c1-6ed0-434a-8b3c-7951beee3fe0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0325 02:17:08.439994  519649 system_pods.go:61] "kube-proxy-l6tg2" [f41c6b8d-0d57-4096-af80-8e9a7da29b60] Running
	I0325 02:17:08.440003  519649 system_pods.go:61] "kube-scheduler-no-preload-20220325020326-262786" [a41de5aa-8f3c-46cd-bc8e-85c035c31512] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0325 02:17:08.440012  519649 system_pods.go:61] "metrics-server-b955d9d8-dzczk" [5c06ad70-f575-44ee-8a14-d4d2b172ccf2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:17:08.440019  519649 system_pods.go:61] "storage-provisioner" [d778a38b-7ebf-4a50-956a-6628a9055852] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:17:08.440027  519649 system_pods.go:74] duration metric: took 7.223437ms to wait for pod list to return data ...
	I0325 02:17:08.440037  519649 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:17:08.443080  519649 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:17:08.443104  519649 node_conditions.go:123] node cpu capacity is 8
	I0325 02:17:08.443116  519649 node_conditions.go:105] duration metric: took 3.071905ms to run NodePressure ...
	I0325 02:17:08.443134  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:17:08.590505  519649 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0325 02:17:08.611319  519649 kubeadm.go:752] kubelet initialised
	I0325 02:17:08.611346  519649 kubeadm.go:753] duration metric: took 20.794737ms waiting for restarted kubelet to initialise ...
	I0325 02:17:08.611354  519649 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:17:08.617229  519649 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-b9827" in "kube-system" namespace to be "Ready" ...
	I0325 02:17:10.623188  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:13.123899  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:15.623504  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:18.123637  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:20.623486  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:22.624740  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:24.625363  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:27.123366  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:29.123949  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:31.623836  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:34.123164  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:36.123971  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:38.623418  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:40.623650  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:43.124505  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:45.624087  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:48.123363  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:50.623592  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:52.624829  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:55.124055  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:57.623248  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:17:59.623684  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:01.623899  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:04.123560  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:06.124019  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:08.623070  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:11.123374  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:13.623289  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:15.623672  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:18.124412  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:20.624197  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:23.123807  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:25.124272  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:27.624274  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:30.123559  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:32.623099  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:34.623275  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:36.623368  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:38.623990  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:40.624162  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:43.123758  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:45.623667  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:47.623731  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:50.123654  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:52.623485  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:54.623818  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:57.123496  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:18:59.124157  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:01.623917  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:04.123410  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:06.124235  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:08.124325  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:10.623795  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:13.123199  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:15.124279  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:17.623867  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:20.124329  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:22.622920  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:24.623325  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:26.623622  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:29.123797  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:31.124139  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:33.623203  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:35.623380  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:37.623882  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:40.123633  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:42.124935  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:44.622980  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:46.623461  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:48.623960  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:19:51.123453  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... pod_ready.go:102 poll repeated every ~2.5s with the same Unschedulable status for "coredns-64897985d-b9827" (02:19:53 through 02:21:05) ...]
	I0325 02:21:08.123740  519649 pod_ready.go:102] pod "coredns-64897985d-b9827" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:04:17 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:21:08.621002  519649 pod_ready.go:81] duration metric: took 4m0.003733568s waiting for pod "coredns-64897985d-b9827" in "kube-system" namespace to be "Ready" ...
	E0325 02:21:08.621038  519649 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-b9827" in "kube-system" namespace to be "Ready" (will not retry!)
	I0325 02:21:08.621065  519649 pod_ready.go:38] duration metric: took 4m0.009701445s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:21:08.621095  519649 kubeadm.go:605] restartCluster took 4m15.930218796s
	W0325 02:21:08.621264  519649 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0325 02:21:08.621308  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0325 02:21:10.388277  519649 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.766939087s)
	I0325 02:21:10.388356  519649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:21:10.397928  519649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:21:10.405143  519649 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 02:21:10.405196  519649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:21:10.412369  519649 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0325 02:21:10.412423  519649 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0325 02:21:23.839012  519649 out.go:203]   - Generating certificates and keys ...
	I0325 02:21:23.841551  519649 out.go:203]   - Booting up control plane ...
	I0325 02:21:23.844819  519649 out.go:203]   - Configuring RBAC rules ...
	I0325 02:21:23.846446  519649 cni.go:93] Creating CNI manager for ""
	I0325 02:21:23.846463  519649 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:21:23.848159  519649 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:21:23.848260  519649 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:21:23.851792  519649 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl ...
	I0325 02:21:23.851811  519649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:21:23.864694  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:21:24.545001  519649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 02:21:24.545086  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:24.545087  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=no-preload-20220325020326-262786 minikube.k8s.io/updated_at=2022_03_25T02_21_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:24.552352  519649 ops.go:34] apiserver oom_adj: -16
	I0325 02:21:24.617795  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:25.174278  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:25.675236  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:26.175029  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:26.674497  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:27.174775  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:27.674258  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:28.174824  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:28.674646  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:29.174252  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:29.675260  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:30.175187  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:30.674792  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:31.174185  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:31.674250  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:32.174501  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:32.675112  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:33.174579  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:33.674182  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:34.174816  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:34.674733  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:35.174444  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:35.675064  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:36.174387  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:36.674259  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:37.174753  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:37.675061  519649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:21:37.741138  519649 kubeadm.go:1020] duration metric: took 13.196118254s to wait for elevateKubeSystemPrivileges.
	I0325 02:21:37.741171  519649 kubeadm.go:393] StartCluster complete in 4m45.096948299s
	I0325 02:21:37.741190  519649 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:21:37.741314  519649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:21:37.742545  519649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:21:38.259722  519649 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220325020326-262786" rescaled to 1
	I0325 02:21:38.259791  519649 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:21:38.261749  519649 out.go:176] * Verifying Kubernetes components...
	I0325 02:21:38.259824  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 02:21:38.261828  519649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:21:38.259842  519649 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0325 02:21:38.261923  519649 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220325020326-262786"
	I0325 02:21:38.261953  519649 addons.go:65] Setting metrics-server=true in profile "no-preload-20220325020326-262786"
	I0325 02:21:38.261962  519649 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220325020326-262786"
	I0325 02:21:38.261965  519649 addons.go:153] Setting addon metrics-server=true in "no-preload-20220325020326-262786"
	I0325 02:21:38.261933  519649 addons.go:65] Setting dashboard=true in profile "no-preload-20220325020326-262786"
	W0325 02:21:38.261977  519649 addons.go:165] addon metrics-server should already be in state true
	I0325 02:21:38.262018  519649 host.go:66] Checking if "no-preload-20220325020326-262786" exists ...
	W0325 02:21:38.261970  519649 addons.go:165] addon storage-provisioner should already be in state true
	I0325 02:21:38.262134  519649 host.go:66] Checking if "no-preload-20220325020326-262786" exists ...
	I0325 02:21:38.261981  519649 addons.go:153] Setting addon dashboard=true in "no-preload-20220325020326-262786"
	W0325 02:21:38.262196  519649 addons.go:165] addon dashboard should already be in state true
	I0325 02:21:38.262244  519649 host.go:66] Checking if "no-preload-20220325020326-262786" exists ...
	I0325 02:21:38.261943  519649 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220325020326-262786"
	I0325 02:21:38.262309  519649 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220325020326-262786"
	I0325 02:21:38.260052  519649 config.go:176] Loaded profile config "no-preload-20220325020326-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
	I0325 02:21:38.262573  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:21:38.262579  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:21:38.262610  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:21:38.262698  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:21:38.272707  519649 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220325020326-262786" to be "Ready" ...
	I0325 02:21:38.320596  519649 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:21:38.320821  519649 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:21:38.320836  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 02:21:38.320907  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:21:38.321013  519649 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220325020326-262786"
	W0325 02:21:38.321039  519649 addons.go:165] addon default-storageclass should already be in state true
	I0325 02:21:38.321070  519649 host.go:66] Checking if "no-preload-20220325020326-262786" exists ...
	I0325 02:21:38.321575  519649 cli_runner.go:133] Run: docker container inspect no-preload-20220325020326-262786 --format={{.State.Status}}
	I0325 02:21:38.324184  519649 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0325 02:21:38.324252  519649 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0325 02:21:38.324270  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0325 02:21:38.324324  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:21:38.336145  519649 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I0325 02:21:38.337877  519649 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0325 02:21:38.337968  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0325 02:21:38.337980  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0325 02:21:38.338045  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:21:38.376075  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:21:38.378999  519649 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 02:21:38.379027  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 02:21:38.379082  519649 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220325020326-262786
	I0325 02:21:38.384592  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:21:38.391085  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:21:38.406139  519649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0325 02:21:38.430033  519649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49589 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220325020326-262786/id_rsa Username:docker}
	I0325 02:21:38.505660  519649 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0325 02:21:38.505695  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0325 02:21:38.510841  519649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:21:38.602641  519649 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0325 02:21:38.602672  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0325 02:21:38.694575  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0325 02:21:38.694613  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0325 02:21:38.696025  519649 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:21:38.696050  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0325 02:21:38.705044  519649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 02:21:38.789746  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0325 02:21:38.789782  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0325 02:21:38.791823  519649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:21:38.813086  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0325 02:21:38.813128  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0325 02:21:38.895062  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0325 02:21:38.895094  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0325 02:21:38.912219  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0325 02:21:38.912252  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0325 02:21:39.000012  519649 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0325 02:21:39.085188  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0325 02:21:39.085284  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0325 02:21:39.190895  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0325 02:21:39.190929  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0325 02:21:39.210367  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0325 02:21:39.210397  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0325 02:21:39.285312  519649 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:21:39.285346  519649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0325 02:21:39.306663  519649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:21:39.525639  519649 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220325020326-262786"
	I0325 02:21:40.286516  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:21:40.404818  519649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.098109992s)
	I0325 02:21:40.407835  519649 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0325 02:21:40.407870  519649 addons.go:417] enableAddons completed in 2.14803176s
	I0325 02:21:42.779727  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	[... node_ready.go:58 poll repeated every ~2.5s, node "no-preload-20220325020326-262786" still "Ready":"False" (02:21:44 through 02:25:36) ...]
	I0325 02:25:38.282274  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:38.282298  519649 node_ready.go:38] duration metric: took 4m0.009544217s waiting for node "no-preload-20220325020326-262786" to be "Ready" ...
	I0325 02:25:38.285018  519649 out.go:176] 
	W0325 02:25:38.285266  519649 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:25:38.285284  519649 out.go:241] * 
	W0325 02:25:38.286304  519649 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0325 02:25:38.288291  519649 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-20220325020326-262786 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.4-rc.0": exit status 80
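The wait loop above shows the failure shape: coredns stays Pending behind the node.kubernetes.io/not-ready taint, the node itself never reports Ready inside the 6m0s budget, and start exits with GUEST_START. As a minimal triage sketch (assuming only the profile name taken from this log and standard minikube/kubectl CLI behavior; nothing here is specific to this run), the usual next steps would be:

	# Why is the node stuck NotReady? (conditions, taints, CNI state)
	minikube -p no-preload-20220325020326-262786 kubectl -- describe node
	# Are kindnet/coredns/kube-proxy pods scheduling and starting?
	minikube -p no-preload-20220325020326-262786 kubectl -- get pods -n kube-system -o wide
	# Full log bundle for the GitHub issue, as the error box above suggests
	minikube -p no-preload-20220325020326-262786 logs --file=logs.txt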
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20220325020326-262786
helpers_test.go:236: (dbg) docker inspect no-preload-20220325020326-262786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778",
	        "Created": "2022-03-25T02:03:28.535684956Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-25T02:16:36.215228174Z",
	            "FinishedAt": "2022-03-25T02:16:34.946901711Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/hostname",
	        "HostsPath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/hosts",
	        "LogPath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778-json.log",
	        "Name": "/no-preload-20220325020326-262786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-20220325020326-262786:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220325020326-262786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6-init/diff:/var/lib/docker/overlay2/be54eb94fcdf63a43430c63a2deca34069b6322a2c5c80bf32c41c253b4eef44/diff:/var/lib/docker/overlay2/21ae1babc9289fff260c3571871aeb833b37e21656a9cc3eb8df07eb3fe4f79a/diff:/var/lib/docker/overlay2/3ee3331c2ddb88569b411d0ab54a2ef7d7d01ca16c80ced0833580bfbe9bdada/diff:/var/lib/docker/overlay2/d8bc8d60c9bd47ff1095b644ef6d44396d637a148bcebd5ea3b7706fee6b13fe/diff:/var/lib/docker/overlay2/ca1519d93c9c70a99f709b179bab33e31837f4b561c407c362770656a0ad970a/diff:/var/lib/docker/overlay2/8b7ed626d5c01c442f80e5e9bbe87bdaa4e3b209e4d0720010e78ab32631a44a/diff:/var/lib/docker/overlay2/fb54723378f675b6bc72cd8608807007fdf0fc435e1383398764588c2881dcc7/diff:/var/lib/docker/overlay2/20deb1df880f3adcdce0caa0e0b6ce0170bb01f7b7c564aa7c00c10e886a8422/diff:/var/lib/docker/overlay2/3e9c58516a6ca7eb07cbd77ece10826bcffc2c564c20a046413c894f1e457c14/diff:/var/lib/docker/overlay2/9fb4a5
72727350e63058db77497edb3aa8f3fd157bf3faa4b882f3d6218a2d2c/diff:/var/lib/docker/overlay2/2dad70b776042365cd2686f6925d1728b98e82e82f5ec21fcafaa6ce796653ed/diff:/var/lib/docker/overlay2/d94272e0e249e656b05e1483e035d137254d3bab6b9c568065d1f8783a72cf04/diff:/var/lib/docker/overlay2/c92254120acded698585ef577c9ac3d6f73267981cf36a87ee38ccd694f47b94/diff:/var/lib/docker/overlay2/84b4bbb670c367ba779baabe503b5345e2c3e2beb5a4505c3b235e5db4e89ee6/diff:/var/lib/docker/overlay2/4981a02b24aef7d5c066a42837381dcdd4a299b491d8e55523fca674cd0db0d1/diff:/var/lib/docker/overlay2/c3c34e9c466bb3a144a51042f0930825943916afe285a7f97644c400518f341f/diff:/var/lib/docker/overlay2/44f4921d100d6ba90db390588513726503aec84844325bd99eeb137c6018277f/diff:/var/lib/docker/overlay2/a39a458488b7f863079e4c6b58196e8a4f9082987519a734c45a007cd0d94828/diff:/var/lib/docker/overlay2/f0312047c7b0b02fd66fd826e23406e40cee0ca3ceecfd3ead5dcecbc5026230/diff:/var/lib/docker/overlay2/96c9397a20500e41888794ddb5877995a1734042648a24d59ca0d2ec5021e9de/diff:/var/lib/d
ocker/overlay2/8dea2fad08fc7127380e7b5ee48074c49d9bb8abb4e0e626d1753b47e734e16a/diff:/var/lib/docker/overlay2/b45ce3d74626e250be956220b3bdd19784c7b5f160566cf2abc4e3bebec2e787/diff:/var/lib/docker/overlay2/c53d5b53646725c2e75d104fbdf63f67b1a6d4ec7be410f678c39db7ca88704f/diff:/var/lib/docker/overlay2/c989625713fed7c79c6acf122f86cb4a5d36c5c25f16b6ff042aba0f5c76ef40/diff:/var/lib/docker/overlay2/062c90de70f705242f19d7fd008480be165d852e536336d97bcfe7aaba03bc2c/diff:/var/lib/docker/overlay2/9dd5e3e1997449a8dd0820e30ab1aa5b34db265e9783f9431ebcca7ceaf17510/diff:/var/lib/docker/overlay2/4cb50a0a67380109d348cd3005e5b855fceaf243cf5b0130df8952ed58e6c56c/diff:/var/lib/docker/overlay2/1fce572a3789e30bd91fd684a3bc2cae58743b3718b1d078378158f22156795e/diff:/var/lib/docker/overlay2/2bb28738c8f2de75a3da83169e8b29e28c57bf73908fe80dbca06551ac39d459/diff:/var/lib/docker/overlay2/70c1f9c120af3acda7bbd97c063aeed205a47f16b2818b7a2c4e5cfa2e3321bf/diff:/var/lib/docker/overlay2/84cfc718f71abd3da77845f467dceaeceb62953d1f92e9cb2d966b19d2e
9a733/diff:/var/lib/docker/overlay2/8d6f862f75e903072cefca0f974c925dc5946ac5bf7bcb923adecf23cdb3d454/diff:/var/lib/docker/overlay2/778af97f4ec3a1e9ceed247958939b375c3209058ee649ac0231b3ccf59c0e5d/diff:/var/lib/docker/overlay2/c0e0a5b57f41ef9ddf67d67f928bcbbd060abb8aa3ec732c9ee48b3d5ce723a2/diff:/var/lib/docker/overlay2/f4bc2ed173f4985e492d89df0a08aa6017952a9ac37054537d57bb7589c1560e/diff:/var/lib/docker/overlay2/562d496753ef0c1e8279787dfdb7cb4d6e8cfbd0eaf79a1f9dc3fd10916330b5/diff:/var/lib/docker/overlay2/717fb77b4f16514e3bd496845adfe213bd63609053b43f6d800e6757197f0f04/diff:/var/lib/docker/overlay2/4e8d84337665652419a5a40f908d308087c202f55b785114c1e38be84a17eca7/diff:/var/lib/docker/overlay2/5b34f3b4b29c9f9ab991b524096584bbf01d14e9d8d4b7786bda6d28241999e8/diff:/var/lib/docker/overlay2/49e6c28c6a50420d2f8e2a3d3278d425495086d9478a7ece39dd989925949a5d/diff:/var/lib/docker/overlay2/86c1534e0117ca4e106fa3c177c4f1b2d85e37b9d2a5dceeb007afff1721713e/diff:/var/lib/docker/overlay2/c5013a5641f131cadca99884c2ae5b785bfae4
a079463490ea0cd215cd884d43/diff:/var/lib/docker/overlay2/f61ccdb261987275521111370c06a14baf102e5047e24281f278eaaee820a410/diff:/var/lib/docker/overlay2/46838e2b0c3f67b4bfda29963d76e2c8babbd54904a4a6f5745e924a73437c2d/diff:/var/lib/docker/overlay2/16180439a4d3ee12ff794b26cbfd692186d7785b4c6f33c8c57416535667c54e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220325020326-262786",
	                "Source": "/var/lib/docker/volumes/no-preload-20220325020326-262786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220325020326-262786",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220325020326-262786",
	                "name.minikube.sigs.k8s.io": "no-preload-20220325020326-262786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6303b0899d592874666e828efb3ee58ea54941cfc0221c7bfbcf1da545710660",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49589"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49588"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49585"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49587"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49586"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6303b0899d59",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220325020326-262786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6f52c20ff4ed",
	                        "no-preload-20220325020326-262786"
	                    ],
	                    "NetworkID": "6fbac9304f70e9e85060797caa05d374912c7ea43808a752012c2c1abc994540",
	                    "EndpointID": "1abb4df8a1d7575cd25c1506b8c27a4565a5cebbb3cb9e69805ea68a845231d8",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
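Per the inspect output above, the container itself recovered cleanly after the stop/start cycle: State.Status is "running" with RestartCount 0, and SSH is republished on 127.0.0.1:49589, so the hang is inside the guest rather than at the Docker layer. Individual fields like these can be read directly with docker inspect's Go-template --format flag instead of parsing the full JSON; a small sketch (the container name comes from this run, and the choice of fields is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "no-preload-20220325020326-262786"
	templates := []string{
		// Container state and restart count.
		"{{.State.Status}} (restarts: {{.RestartCount}})",
		// Published host port for SSH; the same template minikube itself runs below.
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
	}
	for _, tmpl := range templates {
		out, err := exec.Command("docker", "inspect", "-f", tmpl, name).Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Printf("%s", out) // docker's output already ends with a newline
	}
}

The equivalent shell one-liner is docker inspect -f '{{.State.Status}}' no-preload-20220325020326-262786.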
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220325020326-262786 -n no-preload-20220325020326-262786
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220325020326-262786 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:253: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| unpause | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:49 UTC | Fri, 25 Mar 2022 02:14:50 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:52 UTC | Fri, 25 Mar 2022 02:14:53 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:51 UTC | Fri, 25 Mar 2022 02:14:54 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:54 UTC | Fri, 25 Mar 2022 02:14:54 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20220325021454-262786 --memory=2200          | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:54 UTC | Fri, 25 Mar 2022 02:15:49 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:49 UTC | Fri, 25 Mar 2022 02:15:50 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:50 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:21 UTC | Fri, 25 Mar 2022 02:16:22 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:23 UTC | Fri, 25 Mar 2022 02:16:24 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:24 UTC | Fri, 25 Mar 2022 02:16:25 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:25 UTC | Fri, 25 Mar 2022 02:16:35 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:35 UTC | Fri, 25 Mar 2022 02:16:35 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20220325021454-262786 --memory=2200          | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:45 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:45 UTC | Fri, 25 Mar 2022 02:16:45 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:45 UTC | Fri, 25 Mar 2022 02:16:46 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:47 UTC | Fri, 25 Mar 2022 02:16:48 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:48 UTC | Fri, 25 Mar 2022 02:16:51 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:51 UTC | Fri, 25 Mar 2022 02:16:52 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786                      | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:19:35 UTC | Fri, 25 Mar 2022 02:19:36 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:55 UTC | Fri, 25 Mar 2022 02:22:56 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:57 UTC | Fri, 25 Mar 2022 02:22:58 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:59 UTC | Fri, 25 Mar 2022 02:22:59 UTC |
	|         | default-k8s-different-port-20220325020956-262786           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:59 UTC | Fri, 25 Mar 2022 02:23:09 UTC |
	|         | default-k8s-different-port-20220325020956-262786           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:23:09 UTC | Fri, 25 Mar 2022 02:23:09 UTC |
	|         | default-k8s-different-port-20220325020956-262786           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 02:23:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0325 02:23:09.537576  530227 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:23:09.537696  530227 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:23:09.537706  530227 out.go:310] Setting ErrFile to fd 2...
	I0325 02:23:09.537710  530227 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:23:09.537815  530227 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:23:09.538048  530227 out.go:304] Setting JSON to false
	I0325 02:23:09.539384  530227 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":18062,"bootTime":1648156928,"procs":378,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:23:09.539464  530227 start.go:125] virtualization: kvm guest
	I0325 02:23:09.542093  530227 out.go:176] * [default-k8s-different-port-20220325020956-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:23:09.543709  530227 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:23:09.542258  530227 notify.go:193] Checking for updates...
	I0325 02:23:09.545591  530227 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:23:09.547307  530227 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:23:09.548939  530227 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:23:09.550462  530227 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:23:09.550916  530227 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:23:09.551395  530227 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:23:09.596032  530227 docker.go:136] docker version: linux-20.10.14
	I0325 02:23:09.596139  530227 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:23:09.694688  530227 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 02:23:09.627733687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:23:09.694822  530227 docker.go:253] overlay module found
	I0325 02:23:09.697284  530227 out.go:176] * Using the docker driver based on existing profile
	I0325 02:23:09.697314  530227 start.go:284] selected driver: docker
	I0325 02:23:09.697321  530227 start.go:801] validating driver "docker" against &{Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956
-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostT
imeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:23:09.697441  530227 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:23:09.697477  530227 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:23:09.697500  530227 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:23:09.699359  530227 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:23:09.700002  530227 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:23:09.794728  530227 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 02:23:09.730700135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0325 02:23:09.794990  530227 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:23:09.795026  530227 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:23:09.797186  530227 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:23:09.797321  530227 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 02:23:09.797348  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:09.797358  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:09.797376  530227 start_flags.go:304] config:
	{Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNod
eRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:23:09.799343  530227 out.go:176] * Starting control plane node default-k8s-different-port-20220325020956-262786 in cluster default-k8s-different-port-20220325020956-262786
	I0325 02:23:09.799390  530227 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:23:09.800868  530227 out.go:176] * Pulling base image ...
	I0325 02:23:09.800894  530227 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:23:09.800929  530227 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
	I0325 02:23:09.800950  530227 cache.go:57] Caching tarball of preloaded images
	I0325 02:23:09.800988  530227 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:23:09.801249  530227 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 02:23:09.801271  530227 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on containerd
	I0325 02:23:09.801464  530227 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json ...
	I0325 02:23:09.836753  530227 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:23:09.836785  530227 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:23:09.836808  530227 cache.go:208] Successfully downloaded all kic artifacts
	I0325 02:23:09.836875  530227 start.go:348] acquiring machines lock for default-k8s-different-port-20220325020956-262786: {Name:mk1740da455fcceda9a6f7400776a3a68790d09c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:23:09.836992  530227 start.go:352] acquired machines lock for "default-k8s-different-port-20220325020956-262786" in 82.748µs
	I0325 02:23:09.837017  530227 start.go:94] Skipping create...Using existing machine configuration
	I0325 02:23:09.837034  530227 fix.go:55] fixHost starting: 
	I0325 02:23:09.837307  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:23:09.870534  530227 fix.go:108] recreateIfNeeded on default-k8s-different-port-20220325020956-262786: state=Stopped err=<nil>
	W0325 02:23:09.870565  530227 fix.go:134] unexpected machine state, will restart: <nil>
	I0325 02:23:06.279884  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:08.779908  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:09.872836  530227 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20220325020956-262786" ...
	I0325 02:23:09.872897  530227 cli_runner.go:133] Run: docker start default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.277624  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:23:10.313461  530227 kic.go:420] container "default-k8s-different-port-20220325020956-262786" state is running.
	I0325 02:23:10.314041  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.349467  530227 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json ...
	I0325 02:23:10.349684  530227 machine.go:88] provisioning docker machine ...
	I0325 02:23:10.349734  530227 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220325020956-262786"
	I0325 02:23:10.349784  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.385648  530227 main.go:130] libmachine: Using SSH client type: native
	I0325 02:23:10.385835  530227 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49594 <nil> <nil>}
	I0325 02:23:10.385854  530227 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220325020956-262786 && echo "default-k8s-different-port-20220325020956-262786" | sudo tee /etc/hostname
	I0325 02:23:10.386524  530227 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33004->127.0.0.1:49594: read: connection reset by peer
	I0325 02:23:13.516245  530227 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220325020956-262786
	
	I0325 02:23:13.516321  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.552077  530227 main.go:130] libmachine: Using SSH client type: native
	I0325 02:23:13.552283  530227 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49594 <nil> <nil>}
	I0325 02:23:13.552307  530227 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220325020956-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220325020956-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220325020956-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 02:23:13.671145  530227 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 02:23:13.671181  530227 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558
/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:23:13.671209  530227 ubuntu.go:177] setting up certificates
	I0325 02:23:13.671220  530227 provision.go:83] configureAuth start
	I0325 02:23:13.671284  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.707509  530227 provision.go:138] copyHostCerts
	I0325 02:23:13.707567  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:23:13.707583  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:23:13.707654  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:23:13.707752  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:23:13.707763  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:23:13.707785  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:23:13.707835  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:23:13.707843  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:23:13.707863  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:23:13.707902  530227 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220325020956-262786 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220325020956-262786]
	I0325 02:23:13.801684  530227 provision.go:172] copyRemoteCerts
	I0325 02:23:13.801761  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:23:13.801796  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.837900  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:13.926796  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0325 02:23:13.945040  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:23:13.962557  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0325 02:23:13.980609  530227 provision.go:86] duration metric: configureAuth took 309.376559ms
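The configureAuth step above rebuilds the Docker-machine style TLS material: it copies ca.pem/cert.pem/key.pem into the profile and signs a fresh server certificate whose SANs cover the container IP, localhost and the cluster name. A minimal openssl sketch of the same idea (file names and the exact SAN list are illustrative assumptions, not minikube's code path):

  # sign a server cert against an existing CA, embedding SANs like those in the log above
  openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
    -subj "/O=jenkins.default-k8s-different-port-20220325020956-262786" -out server.csr
  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
    -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube') \
    -out server.pem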
	I0325 02:23:13.980640  530227 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:23:13.980824  530227 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:23:13.980838  530227 machine.go:91] provisioned docker machine in 3.631132536s
	I0325 02:23:13.980846  530227 start.go:302] post-start starting for "default-k8s-different-port-20220325020956-262786" (driver="docker")
	I0325 02:23:13.980853  530227 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:23:13.980892  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:23:13.980932  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.016302  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.102734  530227 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:23:14.105732  530227 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:23:14.105760  530227 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:23:14.105786  530227 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:23:14.105795  530227 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:23:14.105810  530227 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:23:14.105871  530227 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:23:14.105966  530227 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:23:14.106069  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:23:14.113216  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:23:14.131102  530227 start.go:305] post-start completed in 150.235781ms
	I0325 02:23:14.131193  530227 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:23:14.131252  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.166319  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.255555  530227 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:23:14.259268  530227 fix.go:57] fixHost completed within 4.422236664s
	I0325 02:23:14.259296  530227 start.go:81] releasing machines lock for "default-k8s-different-port-20220325020956-262786", held for 4.422290413s
	I0325 02:23:14.259383  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.295568  530227 ssh_runner.go:195] Run: systemctl --version
	I0325 02:23:14.295622  530227 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:23:14.295624  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.295670  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.331630  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.332124  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.440710  530227 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:23:14.453593  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:23:14.463531  530227 docker.go:183] disabling docker service ...
	I0325 02:23:14.463587  530227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:23:14.473649  530227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:23:14.482885  530227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:23:10.781510  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:13.279624  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:15.280218  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:14.552504  530227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:23:14.625188  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 02:23:14.634619  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:23:14.648987  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgI
CBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZ
XMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY
2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
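The long blob above is the entire containerd config.toml, base64-encoded so it survives shell quoting inside the ssh command, then decoded on the node and written through sudo tee. The same pattern, sketched with an assumed local config.toml:

  # encode locally, decode remotely; sidesteps escaping issues in the remote command line
  b64=$(base64 -w0 config.toml)
  echo "$b64" | base64 -d | sudo tee /etc/containerd/config.toml >/dev/null
  sudo systemctl restart containerd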
	I0325 02:23:14.662584  530227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:23:14.669661  530227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:23:14.676535  530227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:23:14.749687  530227 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:23:14.824010  530227 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:23:14.824124  530227 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:23:14.828479  530227 start.go:462] Will wait 60s for crictl version
	I0325 02:23:14.828546  530227 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:23:14.854134  530227 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:23:14Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0325 02:23:17.779273  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:19.780082  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:21.780204  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:24.279380  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:25.901131  530227 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:23:25.924531  530227 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 02:23:25.924599  530227 ssh_runner.go:195] Run: containerd --version
	I0325 02:23:25.944738  530227 ssh_runner.go:195] Run: containerd --version
	I0325 02:23:25.965406  530227 out.go:176] * Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	I0325 02:23:25.965490  530227 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20220325020956-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:23:25.998365  530227 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0325 02:23:26.001776  530227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
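The /etc/hosts edit above rewrites the file rather than editing it in place: filter out any stale host.minikube.internal line, append the fresh one, and copy the result back. Using cp instead of mv matters inside a container, where /etc/hosts is typically a bind mount whose inode cannot be replaced. The same idempotent pattern, spelled out:

  # drop any old entry, add the current one, then overwrite in place
  { grep -v $'\thost.minikube.internal$' /etc/hosts
    echo $'192.168.49.1\thost.minikube.internal'; } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts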
	I0325 02:23:26.013555  530227 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 02:23:26.013655  530227 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:23:26.013730  530227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:23:26.037965  530227 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:23:26.037994  530227 containerd.go:526] Images already preloaded, skipping extraction
	I0325 02:23:26.038048  530227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:23:26.062141  530227 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:23:26.062166  530227 cache_images.go:84] Images are preloaded, skipping loading
	I0325 02:23:26.062213  530227 ssh_runner.go:195] Run: sudo crictl info
	I0325 02:23:26.086309  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:26.086334  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:26.086348  530227 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 02:23:26.086361  530227 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220325020956-262786 NodeName:default-k8s-different-port-20220325020956-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 02:23:26.086482  530227 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220325020956-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0325 02:23:26.086574  530227 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220325020956-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
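The unit fragment above is installed as a systemd drop-in (10-kubeadm.conf under kubelet.service.d, per the scp lines just below); the empty ExecStart= line clears any previously defined command before setting the real one. To inspect the merged result on a node:

  # show the base unit plus all drop-ins, then pick up edits
  systemctl cat kubelet
  sudo systemctl daemon-reload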
	I0325 02:23:26.086621  530227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
	I0325 02:23:26.093791  530227 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 02:23:26.093861  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 02:23:26.101104  530227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (593 bytes)
	I0325 02:23:26.114154  530227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0325 02:23:26.127481  530227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
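The kubeadm.yaml.new just copied is the multi-document config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One way to sanity-check such a file before committing to it, assuming a node with the matching kubeadm binary, is a dry run:

  # exercise the config without mutating the node; --dry-run is a standard kubeadm init flag
  sudo /var/lib/minikube/binaries/v1.23.3/kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml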
	I0325 02:23:26.139891  530227 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0325 02:23:26.142699  530227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:23:26.151979  530227 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786 for IP: 192.168.49.2
	I0325 02:23:26.152115  530227 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 02:23:26.152173  530227 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 02:23:26.152283  530227 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.key
	I0325 02:23:26.152367  530227 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key.dd3b5fb2
	I0325 02:23:26.152432  530227 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key
	I0325 02:23:26.152572  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 02:23:26.152618  530227 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 02:23:26.152633  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 02:23:26.152719  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 02:23:26.152762  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 02:23:26.152796  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 02:23:26.152856  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:23:26.153663  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 02:23:26.170543  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0325 02:23:26.188516  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 02:23:26.206252  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0325 02:23:26.223851  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 02:23:26.240997  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 02:23:26.258925  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 02:23:26.276782  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 02:23:26.293956  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 02:23:26.311184  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 02:23:26.328788  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 02:23:26.345739  530227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 02:23:26.358217  530227 ssh_runner.go:195] Run: openssl version
	I0325 02:23:26.363310  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 02:23:26.371143  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.374386  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.374446  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.379667  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
	I0325 02:23:26.386880  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 02:23:26.394406  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.397558  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.397619  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.402576  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0325 02:23:26.409580  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 02:23:26.416799  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.419794  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.419843  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.424480  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
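The test -L / ln -fs pairs above maintain OpenSSL's hashed CA directory: each certificate under /etc/ssl/certs is reachable via a <subject-hash>.0 symlink, and the hash is exactly what `openssl x509 -hash` prints (b5213941 for minikubeCA.pem here). Done by hand, for an assumed certificate:

  # link a CA into the hash directory so TLS clients scanning /etc/ssl/certs can find it
  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"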
	I0325 02:23:26.431093  530227 kubeadm.go:391] StartCluster: {Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:23:26.431219  530227 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 02:23:26.431267  530227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:23:26.455469  530227 cri.go:87] found id: "f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db"
	I0325 02:23:26.455495  530227 cri.go:87] found id: "246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	I0325 02:23:26.455501  530227 cri.go:87] found id: "dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b"
	I0325 02:23:26.455506  530227 cri.go:87] found id: "21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73"
	I0325 02:23:26.455510  530227 cri.go:87] found id: "bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7"
	I0325 02:23:26.455515  530227 cri.go:87] found id: "6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182"
	I0325 02:23:26.455520  530227 cri.go:87] found id: "c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd"
	I0325 02:23:26.455524  530227 cri.go:87] found id: ""
	I0325 02:23:26.455562  530227 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0325 02:23:26.469264  530227 cri.go:114] JSON = null
	W0325 02:23:26.469319  530227 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 7
	I0325 02:23:26.469383  530227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 02:23:26.476380  530227 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0325 02:23:26.476423  530227 kubeadm.go:601] restartCluster start
	I0325 02:23:26.476467  530227 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0325 02:23:26.483313  530227 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:26.484051  530227 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220325020956-262786" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:23:26.484409  530227 kubeconfig.go:127] "default-k8s-different-port-20220325020956-262786" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0325 02:23:26.485050  530227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:23:26.486481  530227 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0325 02:23:26.493604  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.493676  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.502078  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
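This status probe repeats roughly every 200ms below until a PID appears or the check gives up. The pgrep flags do the heavy lifting: -f matches against the full command line, -x requires the whole line to match the pattern, and -n returns only the newest matching process:

  # a non-zero exit simply means no kube-apiserver is running yet
  sudo pgrep -xnf 'kube-apiserver.*minikube.*'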
	I0325 02:23:26.702482  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.702567  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.712014  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:26.902246  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.902320  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.910978  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.103208  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.103289  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.111964  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.303121  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.303213  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.312214  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.502493  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.502598  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.511468  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.702747  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.702890  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.711697  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.902931  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.903050  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.912319  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.102538  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.102634  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.111710  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.303008  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.303080  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.312078  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.502221  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.502313  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.511095  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.702230  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.702303  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.711103  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.902322  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.902413  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.911515  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.102704  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.102774  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.111434  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.302770  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.302858  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.311706  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.503069  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.503150  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.512690  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.512721  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.512770  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.521635  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.521669  530227 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0325 02:23:29.521677  530227 kubeadm.go:1067] stopping kube-system containers ...
	I0325 02:23:29.521695  530227 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0325 02:23:29.521749  530227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:23:26.279462  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:28.279531  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:30.280060  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:29.546890  530227 cri.go:87] found id: "f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db"
	I0325 02:23:29.546921  530227 cri.go:87] found id: "246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	I0325 02:23:29.546927  530227 cri.go:87] found id: "dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b"
	I0325 02:23:29.546932  530227 cri.go:87] found id: "21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73"
	I0325 02:23:29.546937  530227 cri.go:87] found id: "bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7"
	I0325 02:23:29.546942  530227 cri.go:87] found id: "6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182"
	I0325 02:23:29.546946  530227 cri.go:87] found id: "c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd"
	I0325 02:23:29.546979  530227 cri.go:87] found id: ""
	I0325 02:23:29.546987  530227 cri.go:232] Stopping containers: [f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db 246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739 dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b 21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73 bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7 6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182 c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd]
	I0325 02:23:29.547049  530227 ssh_runner.go:195] Run: which crictl
	I0325 02:23:29.550389  530227 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db 246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739 dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b 21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73 bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7 6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182 c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd
	I0325 02:23:29.575922  530227 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0325 02:23:29.586795  530227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:23:29.594440  530227 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar 25 02:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Mar 25 02:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2131 Mar 25 02:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Mar 25 02:10 /etc/kubernetes/scheduler.conf
	
	I0325 02:23:29.594520  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0325 02:23:29.601472  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0325 02:23:29.608305  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0325 02:23:29.615261  530227 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.615319  530227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0325 02:23:29.622383  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0325 02:23:29.629095  530227 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.629161  530227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
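Each existing kubeconfig-style file was just grepped for the expected control-plane endpoint (port 8444 here); files that may still point at a stale server URL are deleted so kubeadm can regenerate them. Roughly:

  # keep only configs that already reference the current control-plane endpoint
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q 'https://control-plane.minikube.internal:8444' "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"
  done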
	I0325 02:23:29.636095  530227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:23:29.642934  530227 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0325 02:23:29.642998  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:29.687932  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.297307  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.428688  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.476555  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
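Because this is a cluster restart rather than a fresh `kubeadm init`, minikube replays only the phases it needs, in the order shown above. The same sequence as a standalone loop, mirroring the logged commands:

  cfg=/var/tmp/minikube/kubeadm.yaml
  bin=/var/lib/minikube/binaries/v1.23.3
  # regenerate certs and kubeconfigs, restart kubelet, then bring up static pods and etcd
  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
    sudo env PATH="$bin:$PATH" kubeadm init phase $phase --config "$cfg"
  done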
	I0325 02:23:30.528341  530227 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:23:30.528397  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:31.037340  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:31.536903  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:32.037557  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:32.537100  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:33.037156  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:33.537124  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:34.037604  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:34.536762  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:32.280264  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:34.779413  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:35.037573  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:35.536890  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.037157  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.536733  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.598317  530227 api_server.go:71] duration metric: took 6.069979844s to wait for apiserver process to appear ...
	I0325 02:23:36.598362  530227 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:23:36.598380  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:36.598866  530227 api_server.go:256] stopped: https://192.168.49.2:8444/healthz: Get "https://192.168.49.2:8444/healthz": dial tcp 192.168.49.2:8444: connect: connection refused
	I0325 02:23:37.099575  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:36.779484  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:38.779979  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:40.211650  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0325 02:23:40.211687  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0325 02:23:40.599053  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:40.603812  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:23:40.603846  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:23:41.099269  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:41.104481  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:23:41.104517  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
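The [+]/[-] listing is the apiserver's per-component healthz report; the two [-] entries are the post-start hooks that seed RBAC bootstrap roles and system priority classes, which routinely lag a few seconds after a restart and then flip to ok, as the 200 just below shows. The detailed form can also be requested explicitly:

  curl -k 'https://192.168.49.2:8444/healthz?verbose'
  # on this version the split endpoints /livez and /readyz accept the same query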
	I0325 02:23:41.599902  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:41.604945  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0325 02:23:41.612918  530227 api_server.go:140] control plane version: v1.23.3
	I0325 02:23:41.612944  530227 api_server.go:130] duration metric: took 5.014575703s to wait for apiserver health ...
	I0325 02:23:41.612957  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:41.612965  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:41.615242  530227 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:23:41.615325  530227 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:23:41.619644  530227 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 02:23:41.619669  530227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:23:41.633910  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:23:42.356822  530227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:23:42.365307  530227 system_pods.go:59] 9 kube-system pods found
	I0325 02:23:42.365343  530227 system_pods.go:61] "coredns-64897985d-9tgbz" [0d638e01-927d-4431-bf10-393b424f801a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365353  530227 system_pods.go:61] "etcd-default-k8s-different-port-20220325020956-262786" [10e10258-89d5-423b-850f-60ef4b12b83a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0325 02:23:42.365361  530227 system_pods.go:61] "kindnet-kt955" [87a42b24-60b7-415b-abc9-e574262093c0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:23:42.365368  530227 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220325020956-262786" [877f6ccd-dcc7-47ff-8574-9b9ec1b05a25] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0325 02:23:42.365376  530227 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220325020956-262786" [cbd16e08-169e-458a-b9c2-bcaa627475cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0325 02:23:42.365382  530227 system_pods.go:61] "kube-proxy-7cpjt" [6d1657ba-6fcd-4ee8-8293-b6aa0b7e1fb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0325 02:23:42.365387  530227 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220325020956-262786" [7b21b770-272f-4183-a1e4-6cca761e7be8] Running
	I0325 02:23:42.365395  530227 system_pods.go:61] "metrics-server-b955d9d8-h94qn" [f250996f-f9e2-41f2-ba86-6da05d627811] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365401  530227 system_pods.go:61] "storage-provisioner" [1f4e27b1-94bb-49ed-b16e-7237ce00c11a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365409  530227 system_pods.go:74] duration metric: took 8.560724ms to wait for pod list to return data ...
	I0325 02:23:42.365419  530227 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:23:42.368395  530227 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:23:42.368426  530227 node_conditions.go:123] node cpu capacity is 8
	I0325 02:23:42.368439  530227 node_conditions.go:105] duration metric: took 3.013418ms to run NodePressure ...
	I0325 02:23:42.368460  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:42.498603  530227 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0325 02:23:42.503044  530227 kubeadm.go:752] kubelet initialised
	I0325 02:23:42.503087  530227 kubeadm.go:753] duration metric: took 4.396508ms waiting for restarted kubelet to initialise ...
	I0325 02:23:42.503097  530227 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:23:42.508446  530227 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-9tgbz" in "kube-system" namespace to be "Ready" ...
	I0325 02:23:44.514894  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:41.279719  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:43.779807  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:46.515114  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:49.014836  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:46.279223  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:48.279265  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:50.280221  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:51.514564  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:54.014786  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:52.780223  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:55.279495  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:56.514871  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:59.014358  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:57.280104  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:59.779435  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:01.015007  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:03.514691  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:01.779945  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:03.780076  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:05.515135  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:08.014925  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:06.279495  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:08.280022  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:10.514744  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:12.514875  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:10.779769  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:13.279988  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:15.014427  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:17.514431  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:15.779111  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:17.779860  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:20.282496  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:20.015198  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:22.514500  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:22.779392  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:24.779583  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:25.014188  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:27.015284  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:29.515114  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:27.280129  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:29.779139  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:32.015294  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:34.514331  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:31.779438  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:34.279292  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:36.514446  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:39.014203  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:36.280233  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:38.779288  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:41.015081  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:43.515133  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:40.779876  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:43.279836  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:45.280111  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:46.014807  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:48.513848  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:47.779037  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:49.779225  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:50.514522  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:53.014610  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:52.279107  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:54.279992  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:55.514800  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:58.014633  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:56.280212  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:58.779953  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:00.514555  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:02.514600  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:04.514849  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:00.780066  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:03.279884  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:05.280246  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:07.014221  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:09.014509  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:07.780397  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:10.279278  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:11.014691  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:13.014798  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:12.779414  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:15.279560  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:15.514210  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:17.514263  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:19.515014  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:17.779664  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:19.779727  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:22.014469  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:24.015322  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:22.279477  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:24.779885  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:26.514766  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:29.014967  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:27.279254  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:29.280083  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:31.514230  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:34.014655  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:31.779951  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:34.279813  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:36.279928  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:38.282274  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:38.282298  519649 node_ready.go:38] duration metric: took 4m0.009544217s waiting for node "no-preload-20220325020326-262786" to be "Ready" ...
	I0325 02:25:38.285018  519649 out.go:176] 
	W0325 02:25:38.285266  519649 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:25:38.285284  519649 out.go:241] * 
	W0325 02:25:38.286304  519649 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
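	The "X Exiting due to GUEST_START" line above is a readiness wait expiring: node_ready.go polled node "no-preload-20220325020326-262786" for 4m0s inside the overall 6m0s budget and the Ready condition never turned True. A minimal sketch of such a wait loop with client-go, assuming a reachable cluster; the kubeconfig path is hypothetical, and transient Get errors are deliberately swallowed so the poll keeps running:

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Hypothetical kubeconfig path; the CI run uses its own.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        node := "no-preload-20220325020326-262786"
	        // Poll every 500ms for up to 6 minutes, the budget named in the error.
	        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute,
	            func() (bool, error) {
	                n, err := cs.CoreV1().Nodes().Get(context.TODO(), node, metav1.GetOptions{})
	                if err != nil {
	                    return false, nil // transient apiserver error: keep polling
	                }
	                for _, c := range n.Status.Conditions {
	                    if c.Type == corev1.NodeReady {
	                        return c.Status == corev1.ConditionTrue, nil
	                    }
	                }
	                return false, nil
	            })
	        // nil means Ready; wait.ErrWaitTimeout matches the failure above.
	        fmt.Println("wait result:", err)
	    }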
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	e236231390656       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   884eb4334953e
	ab3bef1048aec       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   884eb4334953e
	d6f58f8b25dd7       abbcf459c7739       4 minutes ago        Running             kube-proxy                0                   ca7a34b0094a0
	53b9a35e53a1f       25f8c7f3da61c       4 minutes ago        Running             etcd                      2                   05bf41c7a933c
	86d75965d4f3a       4a82fd4414312       4 minutes ago        Running             kube-scheduler            2                   803cf0205f95a
	b4eeb80b5bb17       9f243260866d4       4 minutes ago        Running             kube-controller-manager   2                   59e731f150549
	b82e990d40b98       ce3b8500a91ff       4 minutes ago        Running             kube-apiserver            2                   0dea622433793
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-03-25 02:16:36 UTC, end at Fri 2022-03-25 02:25:39 UTC. --
	Mar 25 02:21:37 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:21:37.966203061Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-pqqft,Uid:4bc6dee7-b939-402e-bc62-74ce9f083e11,Namespace:kube-system,Attempt:0,}"
	Mar 25 02:21:37 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:21:37.984051753Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/884eb4334953e45ad0fdeb92e81f30a11f606e7a1eed682c7976766c11f4b814 pid=3328
	Mar 25 02:21:37 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:21:37.985591167Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca7a34b0094a0e57a63fa3d855e34798c433f2c6aa7edb0e158a965ce8e41399 pid=3338
	Mar 25 02:21:38 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:21:38.045855606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l5l9q,Uid:4b4e7516-83ab-4ae3-b2bd-4ba1d4635c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca7a34b0094a0e57a63fa3d855e34798c433f2c6aa7edb0e158a965ce8e41399\""
	Mar 25 02:21:38 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:21:38.052027835Z" level=info msg="CreateContainer within sandbox \"ca7a34b0094a0e57a63fa3d855e34798c433f2c6aa7edb0e158a965ce8e41399\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Mar 25 02:21:38 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:21:38.066076742Z" level=info msg="CreateContainer within sandbox \"ca7a34b0094a0e57a63fa3d855e34798c433f2c6aa7edb0e158a965ce8e41399\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d6f58f8b25dd7a12dcd4bbe7b98e14edf89e41d8ed965c5f4afe0581e5dd0409\""
	Mar 25 02:21:38 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:21:38.066716220Z" level=info msg="StartContainer for \"d6f58f8b25dd7a12dcd4bbe7b98e14edf89e41d8ed965c5f4afe0581e5dd0409\""
	Mar 25 02:21:38 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:21:38.140416331Z" level=info msg="StartContainer for \"d6f58f8b25dd7a12dcd4bbe7b98e14edf89e41d8ed965c5f4afe0581e5dd0409\" returns successfully"
	Mar 25 02:21:38 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:21:38.306357379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-pqqft,Uid:4bc6dee7-b939-402e-bc62-74ce9f083e11,Namespace:kube-system,Attempt:0,} returns sandbox id \"884eb4334953e45ad0fdeb92e81f30a11f606e7a1eed682c7976766c11f4b814\""
	Mar 25 02:21:38 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:21:38.309968888Z" level=info msg="CreateContainer within sandbox \"884eb4334953e45ad0fdeb92e81f30a11f606e7a1eed682c7976766c11f4b814\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Mar 25 02:21:38 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:21:38.329617140Z" level=info msg="CreateContainer within sandbox \"884eb4334953e45ad0fdeb92e81f30a11f606e7a1eed682c7976766c11f4b814\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"ab3bef1048aec67b49b09949f76284299ab614b53e9aac670758dd1698bbffd5\""
	Mar 25 02:21:38 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:21:38.330465099Z" level=info msg="StartContainer for \"ab3bef1048aec67b49b09949f76284299ab614b53e9aac670758dd1698bbffd5\""
	Mar 25 02:21:38 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:21:38.693506435Z" level=info msg="StartContainer for \"ab3bef1048aec67b49b09949f76284299ab614b53e9aac670758dd1698bbffd5\" returns successfully"
	Mar 25 02:22:28 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:22:28.728556912Z" level=error msg="ContainerStatus for \"a01e8f7a9cac01f446dba4b29b2c7ed71446ebab909fcc7f9840588b2a8361a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a01e8f7a9cac01f446dba4b29b2c7ed71446ebab909fcc7f9840588b2a8361a9\": not found"
	Mar 25 02:22:28 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:22:28.729134129Z" level=error msg="ContainerStatus for \"a2d2c68d4ad1ec1804689e0373a731fdf98758cb3d206e4e5b8775f2c7da187b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a2d2c68d4ad1ec1804689e0373a731fdf98758cb3d206e4e5b8775f2c7da187b\": not found"
	Mar 25 02:22:28 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:22:28.729657160Z" level=error msg="ContainerStatus for \"0d2d11270b9869812878586feddd26e58310a6b4b08dbd27adbbcfb979cc3f58\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d2d11270b9869812878586feddd26e58310a6b4b08dbd27adbbcfb979cc3f58\": not found"
	Mar 25 02:22:28 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:22:28.730154801Z" level=error msg="ContainerStatus for \"0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f40034eeb6e50c94f067c7a8f1614d00b3bda869ebcc9628fdbf3b23070afdc\": not found"
	Mar 25 02:24:19 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:24:19.029355529Z" level=info msg="shim disconnected" id=ab3bef1048aec67b49b09949f76284299ab614b53e9aac670758dd1698bbffd5
	Mar 25 02:24:19 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:24:19.029408211Z" level=warning msg="cleaning up after shim disconnected" id=ab3bef1048aec67b49b09949f76284299ab614b53e9aac670758dd1698bbffd5 namespace=k8s.io
	Mar 25 02:24:19 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:24:19.029422793Z" level=info msg="cleaning up dead shim"
	Mar 25 02:24:19 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:24:19.041791836Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:24:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3824\n"
	Mar 25 02:24:19 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:24:19.267239698Z" level=info msg="CreateContainer within sandbox \"884eb4334953e45ad0fdeb92e81f30a11f606e7a1eed682c7976766c11f4b814\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Mar 25 02:24:19 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:24:19.279777678Z" level=info msg="CreateContainer within sandbox \"884eb4334953e45ad0fdeb92e81f30a11f606e7a1eed682c7976766c11f4b814\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"e236231390656f33bbcd559aed98e81d9cf6191805f4007c8ab3fc429df08e37\""
	Mar 25 02:24:19 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:24:19.280226347Z" level=info msg="StartContainer for \"e236231390656f33bbcd559aed98e81d9cf6191805f4007c8ab3fc429df08e37\""
	Mar 25 02:24:19 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:24:19.488833777Z" level=info msg="StartContainer for \"e236231390656f33bbcd559aed98e81d9cf6191805f4007c8ab3fc429df08e37\" returns successfully"
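	The "shim disconnected" / "cleaning up after shim disconnected" lines above record the kindnet-cni container (attempt 0) exiting and being recreated as attempt 1, matching the Exited/Running pair in the container status table. To inspect that state directly over the same socket (/run/containerd/containerd.sock), a short sketch using containerd's Go client in the k8s.io namespace; purely illustrative, not part of the test harness:

	    package main

	    import (
	        "context"
	        "fmt"

	        "github.com/containerd/containerd"
	        "github.com/containerd/containerd/namespaces"
	    )

	    func main() {
	        client, err := containerd.New("/run/containerd/containerd.sock")
	        if err != nil {
	            panic(err)
	        }
	        defer client.Close()
	        // Kubernetes-managed containers live in the k8s.io namespace.
	        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	        containers, err := client.Containers(ctx)
	        if err != nil {
	            panic(err)
	        }
	        for _, c := range containers {
	            task, err := c.Task(ctx, nil)
	            if err != nil {
	                // No task: the container exists but is not running,
	                // like the exited kindnet-cni attempt 0 above.
	                fmt.Printf("%s: no task (%v)\n", c.ID(), err)
	                continue
	            }
	            if st, err := task.Status(ctx); err == nil {
	                fmt.Printf("%s: %s\n", c.ID(), st.Status)
	            }
	        }
	    }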
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220325020326-262786
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220325020326-262786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95
	                    minikube.k8s.io/name=no-preload-20220325020326-262786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_25T02_21_24_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Mar 2022 02:21:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220325020326-262786
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 25 Mar 2022 02:25:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Mar 2022 02:21:36 +0000   Fri, 25 Mar 2022 02:21:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Mar 2022 02:21:36 +0000   Fri, 25 Mar 2022 02:21:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Mar 2022 02:21:36 +0000   Fri, 25 Mar 2022 02:21:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 25 Mar 2022 02:21:36 +0000   Fri, 25 Mar 2022 02:21:18 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20220325020326-262786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                38254055-e8ea-4285-a000-185429061264
	  Boot ID:                    63fce5d9-a30b-498a-bfed-7dd46d23a363
	  Kernel Version:             5.13.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.4-rc.0
	  Kube-Proxy Version:         v1.23.4-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220325020326-262786                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m16s
	  kube-system                 kindnet-pqqft                                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-no-preload-20220325020326-262786              250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-no-preload-20220325020326-262786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-l5l9q                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-no-preload-20220325020326-262786              100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m1s   kube-proxy  
	  Normal  Starting                 4m11s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m11s  kubelet     Node no-preload-20220325020326-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s  kubelet     Node no-preload-20220325020326-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s  kubelet     Node no-preload-20220325020326-262786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m11s  kubelet     Updated Node Allocatable limit across pods
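	The description above holds the whole failure chain in one place: the kubelet reports Ready=False because the CNI plugin never initialized, so the node keeps its node.kubernetes.io/not-ready:NoSchedule taint, and that taint is what keeps intolerant workload pods Pending (the same Unschedulable message seen in the pod_ready lines earlier for the parallel default-k8s-different-port cluster). A short client-go sketch that surfaces just those two fields; the kubeconfig path is hypothetical:

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Hypothetical kubeconfig path.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        n, err := cs.CoreV1().Nodes().Get(context.TODO(),
	            "no-preload-20220325020326-262786", metav1.GetOptions{})
	        if err != nil {
	            panic(err)
	        }
	        // The taint that keeps intolerant pods Pending ...
	        for _, t := range n.Spec.Taints {
	            fmt.Printf("taint: %s=%s:%s\n", t.Key, t.Value, t.Effect)
	        }
	        // ... and the kubelet's explanation for it.
	        for _, c := range n.Status.Conditions {
	            if c.Type == corev1.NodeReady {
	                fmt.Printf("Ready=%s (%s): %s\n", c.Status, c.Reason, c.Message)
	            }
	        }
	    }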
	
	* 
	* ==> dmesg <==
	* [  +1.011896] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.023877] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +0.953086] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethf4b51852
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 70 36 f8 5f b0 08 06
	[  +0.031950] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth6535462d
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a c6 0e 0e 23 49 08 06
	[  +0.644934] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth6535462d
	[  +0.401878] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth5b52bbbf
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 37 c2 ed 50 67 08 06
	[  +0.935995] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.035860] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.019942] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[Mar25 02:14] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth3401b1e0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 76 e9 ba cf fb f8 08 06
	[  +0.179199] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vetha9eb2fdf
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 32 f8 c1 5c 31 f0 08 06
	[  +0.564272] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethc1de7e82
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e d1 71 83 67 99 08 06
	[  +0.295714] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth71c4bd69
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe 35 ee 14 12 82 08 06
	
	* 
	* ==> etcd [53b9a35e53a1f11832bf97ad9473cce2d2bb0222ec7087b5d2bb35f4d7e6ed23] <==
	* {"level":"info","ts":"2022-03-25T02:21:17.716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-03-25T02:21:17.716Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-03-25T02:21:17.717Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-03-25T02:21:17.717Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-03-25T02:21:17.717Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-03-25T02:21:17.717Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-03-25T02:21:17.717Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20220325020326-262786 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:21:18.708Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-25T02:21:18.708Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-25T02:21:18.708Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:21:18.708Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:21:18.708Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:21:18.709Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-03-25T02:21:18.709Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  02:25:39 up  5:03,  0 users,  load average: 0.19, 0.45, 0.92
	Linux no-preload-20220325020326-262786 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [b82e990d40b983d4de17f25adbeb2fe89b571c39a24ee87a05b194ed981e9d4b] <==
	* I0325 02:21:22.021308       1 controller.go:611] quota admission added evaluator for: endpoints
	I0325 02:21:22.024899       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0325 02:21:22.702412       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0325 02:21:23.642233       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0325 02:21:23.648660       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0325 02:21:23.657621       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0325 02:21:28.802904       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0325 02:21:37.618891       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0325 02:21:37.667775       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0325 02:21:38.203285       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0325 02:21:39.518185       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.109.176.183]
	I0325 02:21:40.387842       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.104.222.173]
	I0325 02:21:40.399053       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.101.45.129]
	W0325 02:21:40.402102       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:21:40.402187       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:21:40.402205       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0325 02:22:40.402480       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:22:40.402544       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:22:40.402552       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0325 02:24:40.403116       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:24:40.403205       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:24:40.403227       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [b4eeb80b5bb17a15e68bd5bb2307122772dcbffee65dfef90ef3c519989b81c6] <==
	* E0325 02:21:40.219402       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-ccd587f44" failed with pods "kubernetes-dashboard-ccd587f44-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0325 02:21:40.222723       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0325 02:21:40.222837       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-ccd587f44" failed with pods "kubernetes-dashboard-ccd587f44-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0325 02:21:40.222877       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0325 02:21:40.222901       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-ccd587f44-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0325 02:21:40.286464       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-ccd587f44" failed with pods "kubernetes-dashboard-ccd587f44-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0325 02:21:40.286514       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-ccd587f44-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0325 02:21:40.290577       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-wpjtl"
	I0325 02:21:40.310022       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-ccd587f44-g7pm5"
	E0325 02:22:06.991162       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:22:07.402108       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:22:37.012361       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:22:37.416730       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:23:07.033460       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:23:07.431335       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:23:37.057557       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:23:37.445932       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:24:07.076451       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:24:07.462142       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:24:37.095298       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:24:37.477579       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:25:07.112918       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:25:07.493032       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:25:37.129306       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:25:37.508823       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [d6f58f8b25dd7a12dcd4bbe7b98e14edf89e41d8ed965c5f4afe0581e5dd0409] <==
	* I0325 02:21:38.177110       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0325 02:21:38.177160       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0325 02:21:38.177191       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0325 02:21:38.199792       1 server_others.go:206] "Using iptables Proxier"
	I0325 02:21:38.199842       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0325 02:21:38.199853       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0325 02:21:38.199881       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0325 02:21:38.200324       1 server.go:656] "Version info" version="v1.23.4-rc.0"
	I0325 02:21:38.200994       1 config.go:317] "Starting service config controller"
	I0325 02:21:38.201026       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0325 02:21:38.201438       1 config.go:226] "Starting endpoint slice config controller"
	I0325 02:21:38.201453       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0325 02:21:38.301474       1 shared_informer.go:247] Caches are synced for service config 
	I0325 02:21:38.301561       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [86d75965d4f3ab5cae35b8697279da65f0af1b8e291f65a2f57058b7c1595521] <==
	* E0325 02:21:20.686752       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0325 02:21:20.686680       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0325 02:21:20.686797       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0325 02:21:20.686791       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0325 02:21:20.686834       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:21:20.686848       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 02:21:20.686850       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0325 02:21:20.687001       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0325 02:21:20.687026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0325 02:21:20.687025       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 02:21:20.687064       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0325 02:21:21.494111       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0325 02:21:21.494161       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0325 02:21:21.514545       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 02:21:21.514615       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0325 02:21:21.516409       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 02:21:21.516444       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0325 02:21:21.518452       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0325 02:21:21.518502       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0325 02:21:21.523480       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0325 02:21:21.523518       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0325 02:21:21.526456       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 02:21:21.526482       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 02:21:23.057524       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0325 02:21:23.605054       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-03-25 02:16:36 UTC, end at Fri 2022-03-25 02:25:39 UTC. --
	Mar 25 02:23:44 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:23:44.024945    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:23:49 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:23:49.025949    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:23:54 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:23:54.026804    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:23:59 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:23:59.027683    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:24:04 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:24:04.029257    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:24:09 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:24:09.029871    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:24:14 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:24:14.031367    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:24:19 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:24:19.032268    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:24:19 no-preload-20220325020326-262786 kubelet[2906]: I0325 02:24:19.265346    2906 scope.go:110] "RemoveContainer" containerID="ab3bef1048aec67b49b09949f76284299ab614b53e9aac670758dd1698bbffd5"
	Mar 25 02:24:24 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:24:24.033664    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:24:29 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:24:29.034437    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:24:34 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:24:34.035920    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:24:39 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:24:39.037093    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:24:44 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:24:44.038262    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:24:49 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:24:49.039666    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:24:54 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:24:54.041145    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:24:59 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:24:59.042829    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:25:04 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:25:04.044317    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:25:09 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:25:09.045310    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:25:14 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:25:14.046730    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:25:19 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:25:19.048000    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:25:24 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:25:24.049477    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:25:29 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:25:29.050854    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:25:34 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:25:34.052046    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:25:39 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:25:39.053121    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
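
The kubelet log above is dominated by "Container runtime network not ready ... cni plugin not initialized", which is consistent with the node never going Ready and the dashboard pods never scheduling: containerd apparently never loaded a CNI config. A minimal hand-run check along those lines (the profile name is copied from the logs; /etc/cni/net.d is assumed to be the conf dir containerd watches, which this profile may override):

    # List whatever CNI config actually landed on the node, if any.
    out/minikube-linux-amd64 ssh -p no-preload-20220325020326-262786 "sudo ls -l /etc/cni/net.d"
    # containerd reports its last CNI load status in the CRI runtime info.
    out/minikube-linux-amd64 ssh -p no-preload-20220325020326-262786 "sudo crictl info | grep -i -A3 cni"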
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220325020326-262786 -n no-preload-20220325020326-262786

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:262: (dbg) Run:  kubectl --context no-preload-20220325020326-262786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-64897985d-kj92r metrics-server-b955d9d8-9fbns storage-provisioner dashboard-metrics-scraper-56974995fc-wpjtl kubernetes-dashboard-ccd587f44-g7pm5
helpers_test.go:273: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context no-preload-20220325020326-262786 describe pod coredns-64897985d-kj92r metrics-server-b955d9d8-9fbns storage-provisioner dashboard-metrics-scraper-56974995fc-wpjtl kubernetes-dashboard-ccd587f44-g7pm5
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20220325020326-262786 describe pod coredns-64897985d-kj92r metrics-server-b955d9d8-9fbns storage-provisioner dashboard-metrics-scraper-56974995fc-wpjtl kubernetes-dashboard-ccd587f44-g7pm5: exit status 1 (59.892669ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-kj92r" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-9fbns" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-wpjtl" not found
	Error from server (NotFound): pods "kubernetes-dashboard-ccd587f44-g7pm5" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context no-preload-20220325020326-262786 describe pod coredns-64897985d-kj92r metrics-server-b955d9d8-9fbns storage-provisioner dashboard-metrics-scraper-56974995fc-wpjtl kubernetes-dashboard-ccd587f44-g7pm5: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (544.82s)
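
Note that the describe step in the post-mortem above passes pod names without a namespace, so kubectl searched the default namespace and reported NotFound even though the non-running pods it had just listed live in kube-system and kubernetes-dashboard. A hand-run equivalent that would actually reach them (names copied from the failure above):

    kubectl --context no-preload-20220325020326-262786 -n kube-system describe pod coredns-64897985d-kj92r
    kubectl --context no-preload-20220325020326-262786 -n kubernetes-dashboard describe pod kubernetes-dashboard-ccd587f44-g7pm5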

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-m7r5f" [cbd9ac63-ff4c-4653-a967-7e3c503561b6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.) [pod stayed unschedulable; see the taint check after the cert_rotation errors below]
E0325 02:19:40.294257  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
E0325 02:20:48.155343  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
E0325 02:21:00.415431  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 02:21:05.106049  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
E0325 02:21:12.031863  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
E0325 02:22:35.078745  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
E0325 02:22:37.498517  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
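
The Unschedulable reason above (0/1 nodes available, taints not tolerated) on a single-node cluster usually points at the node.kubernetes.io/not-ready taint that the node controller applies while the CNI is down. One way to confirm, with <old-k8s-version-context> standing in for the profile's kubeconfig context (the context name does not appear in this excerpt):

    # Print each node with its current taints.
    kubectl --context <old-k8s-version-context> get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'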

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
    [warning repeated 3 times]

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
    [warning repeated 22 times]
E0325 02:23:30.839270  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
    [warning repeated 16 times]
E0325 02:23:47.791036  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
    [warning repeated 9 times]
E0325 02:23:56.093801  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E0325 02:23:57.674190  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
    [warning repeated 43 times]
E0325 02:24:40.295052  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
[last message repeated 57 more times]

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
[last message repeated 19 more times]
E0325 02:26:00.415348  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
[last message repeated 4 more times]
E0325 02:26:05.106030  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
[last message repeated 6 more times]
E0325 02:26:12.032640  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
[last message repeated 84 more times]
E0325 02:27:37.498536  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
[last message repeated 60 more times]
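Each WARNING line above corresponds to one polling attempt: the helper repeatedly lists pods matching the k8s-app=kubernetes-dashboard label and logs the error whenever the list call's context expires before the API server answers. A minimal sketch of such a wait loop using client-go; the kubeconfig path, 9m deadline, and 3s interval mirror the log, not the test's actual implementation:

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumption: kubeconfig path is illustrative.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		deadline := time.Now().Add(9 * time.Minute)
		for time.Now().Before(deadline) {
			ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			cancel()
			if err != nil {
				// One WARNING line per failed attempt, as in the log above.
				fmt.Println("WARNING: pod list returned:", err)
				time.Sleep(3 * time.Second)
				continue
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("dashboard pod is running:", p.Name)
					return
				}
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for the condition")
	}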
start_stop_delete_test.go:259: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:259: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220325015306-262786 -n old-k8s-version-20220325015306-262786
start_stop_delete_test.go:259: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2022-03-25 02:28:37.201866642 +0000 UTC m=+4241.527995631
start_stop_delete_test.go:259: (dbg) Run:  kubectl --context old-k8s-version-20220325015306-262786 describe po kubernetes-dashboard-766959b846-m7r5f -n kubernetes-dashboard
start_stop_delete_test.go:259: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220325015306-262786 describe po kubernetes-dashboard-766959b846-m7r5f -n kubernetes-dashboard: context deadline exceeded (1.437µs)
start_stop_delete_test.go:259: kubectl --context old-k8s-version-20220325015306-262786 describe po kubernetes-dashboard-766959b846-m7r5f -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:259: (dbg) Run:  kubectl --context old-k8s-version-20220325015306-262786 logs kubernetes-dashboard-766959b846-m7r5f -n kubernetes-dashboard
start_stop_delete_test.go:259: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220325015306-262786 logs kubernetes-dashboard-766959b846-m7r5f -n kubernetes-dashboard: context deadline exceeded (171ns)
start_stop_delete_test.go:259: kubectl --context old-k8s-version-20220325015306-262786 logs kubernetes-dashboard-766959b846-m7r5f -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
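The describe and logs attempts above fail after only 1.437µs and 171ns because the test's shared context had already passed its deadline before kubectl was even started; Go's exec.CommandContext returns the context's error immediately in that case, so no diagnostics could be collected. A minimal sketch demonstrating that behavior (the kubectl arguments are only an example):

	package main
	
	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		// Context that is already expired before the command starts.
		ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
		defer cancel()
		time.Sleep(time.Millisecond)
	
		start := time.Now()
		err := exec.CommandContext(ctx, "kubectl", "version", "--client").Run()
		// Prints something like: exited after 2.1µs: context deadline exceeded
		fmt.Printf("exited after %v: %v\n", time.Since(start), err)
	}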
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220325015306-262786
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220325015306-262786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b",
	        "Created": "2022-03-25T01:56:43.297059247Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496863,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-25T02:09:40.440687134Z",
	            "FinishedAt": "2022-03-25T02:09:39.001215404Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/hosts",
	        "LogPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b-json.log",
	        "Name": "/old-k8s-version-20220325015306-262786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220325015306-262786:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220325015306-262786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3-init/diff:/var/lib/docker/overlay2/be54eb94fcdf63a43430c63a2deca34069b6322a2c5c80bf32c41c253b4eef44/diff:/var/lib/docker/overlay2/21ae1babc9289fff260c3571871aeb833b37e21656a9cc3eb8df07eb3fe4f79a/diff:/var/lib/docker/overlay2/3ee3331c2ddb88569b411d0ab54a2ef7d7d01ca16c80ced0833580bfbe9bdada/diff:/var/lib/docker/overlay2/d8bc8d60c9bd47ff1095b644ef6d44396d637a148bcebd5ea3b7706fee6b13fe/diff:/var/lib/docker/overlay2/ca1519d93c9c70a99f709b179bab33e31837f4b561c407c362770656a0ad970a/diff:/var/lib/docker/overlay2/8b7ed626d5c01c442f80e5e9bbe87bdaa4e3b209e4d0720010e78ab32631a44a/diff:/var/lib/docker/overlay2/fb54723378f675b6bc72cd8608807007fdf0fc435e1383398764588c2881dcc7/diff:/var/lib/docker/overlay2/20deb1df880f3adcdce0caa0e0b6ce0170bb01f7b7c564aa7c00c10e886a8422/diff:/var/lib/docker/overlay2/3e9c58516a6ca7eb07cbd77ece10826bcffc2c564c20a046413c894f1e457c14/diff:/var/lib/docker/overlay2/9fb4a5
72727350e63058db77497edb3aa8f3fd157bf3faa4b882f3d6218a2d2c/diff:/var/lib/docker/overlay2/2dad70b776042365cd2686f6925d1728b98e82e82f5ec21fcafaa6ce796653ed/diff:/var/lib/docker/overlay2/d94272e0e249e656b05e1483e035d137254d3bab6b9c568065d1f8783a72cf04/diff:/var/lib/docker/overlay2/c92254120acded698585ef577c9ac3d6f73267981cf36a87ee38ccd694f47b94/diff:/var/lib/docker/overlay2/84b4bbb670c367ba779baabe503b5345e2c3e2beb5a4505c3b235e5db4e89ee6/diff:/var/lib/docker/overlay2/4981a02b24aef7d5c066a42837381dcdd4a299b491d8e55523fca674cd0db0d1/diff:/var/lib/docker/overlay2/c3c34e9c466bb3a144a51042f0930825943916afe285a7f97644c400518f341f/diff:/var/lib/docker/overlay2/44f4921d100d6ba90db390588513726503aec84844325bd99eeb137c6018277f/diff:/var/lib/docker/overlay2/a39a458488b7f863079e4c6b58196e8a4f9082987519a734c45a007cd0d94828/diff:/var/lib/docker/overlay2/f0312047c7b0b02fd66fd826e23406e40cee0ca3ceecfd3ead5dcecbc5026230/diff:/var/lib/docker/overlay2/96c9397a20500e41888794ddb5877995a1734042648a24d59ca0d2ec5021e9de/diff:/var/lib/d
ocker/overlay2/8dea2fad08fc7127380e7b5ee48074c49d9bb8abb4e0e626d1753b47e734e16a/diff:/var/lib/docker/overlay2/b45ce3d74626e250be956220b3bdd19784c7b5f160566cf2abc4e3bebec2e787/diff:/var/lib/docker/overlay2/c53d5b53646725c2e75d104fbdf63f67b1a6d4ec7be410f678c39db7ca88704f/diff:/var/lib/docker/overlay2/c989625713fed7c79c6acf122f86cb4a5d36c5c25f16b6ff042aba0f5c76ef40/diff:/var/lib/docker/overlay2/062c90de70f705242f19d7fd008480be165d852e536336d97bcfe7aaba03bc2c/diff:/var/lib/docker/overlay2/9dd5e3e1997449a8dd0820e30ab1aa5b34db265e9783f9431ebcca7ceaf17510/diff:/var/lib/docker/overlay2/4cb50a0a67380109d348cd3005e5b855fceaf243cf5b0130df8952ed58e6c56c/diff:/var/lib/docker/overlay2/1fce572a3789e30bd91fd684a3bc2cae58743b3718b1d078378158f22156795e/diff:/var/lib/docker/overlay2/2bb28738c8f2de75a3da83169e8b29e28c57bf73908fe80dbca06551ac39d459/diff:/var/lib/docker/overlay2/70c1f9c120af3acda7bbd97c063aeed205a47f16b2818b7a2c4e5cfa2e3321bf/diff:/var/lib/docker/overlay2/84cfc718f71abd3da77845f467dceaeceb62953d1f92e9cb2d966b19d2e
9a733/diff:/var/lib/docker/overlay2/8d6f862f75e903072cefca0f974c925dc5946ac5bf7bcb923adecf23cdb3d454/diff:/var/lib/docker/overlay2/778af97f4ec3a1e9ceed247958939b375c3209058ee649ac0231b3ccf59c0e5d/diff:/var/lib/docker/overlay2/c0e0a5b57f41ef9ddf67d67f928bcbbd060abb8aa3ec732c9ee48b3d5ce723a2/diff:/var/lib/docker/overlay2/f4bc2ed173f4985e492d89df0a08aa6017952a9ac37054537d57bb7589c1560e/diff:/var/lib/docker/overlay2/562d496753ef0c1e8279787dfdb7cb4d6e8cfbd0eaf79a1f9dc3fd10916330b5/diff:/var/lib/docker/overlay2/717fb77b4f16514e3bd496845adfe213bd63609053b43f6d800e6757197f0f04/diff:/var/lib/docker/overlay2/4e8d84337665652419a5a40f908d308087c202f55b785114c1e38be84a17eca7/diff:/var/lib/docker/overlay2/5b34f3b4b29c9f9ab991b524096584bbf01d14e9d8d4b7786bda6d28241999e8/diff:/var/lib/docker/overlay2/49e6c28c6a50420d2f8e2a3d3278d425495086d9478a7ece39dd989925949a5d/diff:/var/lib/docker/overlay2/86c1534e0117ca4e106fa3c177c4f1b2d85e37b9d2a5dceeb007afff1721713e/diff:/var/lib/docker/overlay2/c5013a5641f131cadca99884c2ae5b785bfae4
a079463490ea0cd215cd884d43/diff:/var/lib/docker/overlay2/f61ccdb261987275521111370c06a14baf102e5047e24281f278eaaee820a410/diff:/var/lib/docker/overlay2/46838e2b0c3f67b4bfda29963d76e2c8babbd54904a4a6f5745e924a73437c2d/diff:/var/lib/docker/overlay2/16180439a4d3ee12ff794b26cbfd692186d7785b4c6f33c8c57416535667c54e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220325015306-262786",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220325015306-262786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220325015306-262786",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220325015306-262786",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220325015306-262786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9df45f20e753a3442300e56c843c60395eccdf6e8a137107895ab514717212ce",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49569"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49568"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49565"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49567"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49566"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9df45f20e753",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220325015306-262786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e6a4c0e8f4c7",
	                        "old-k8s-version-20220325015306-262786"
	                    ],
	                    "NetworkID": "739cf1dc095b5d758dfcb21f6f999d4a170c6b33046de4a26204586f05d2d4a4",
	                    "EndpointID": "57c238ff56f27a16e123eaa684322d154da1947f1f6746c5d5637556a31c9292",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
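
For reference, the mapped host ports in the inspect dump above (22, 2376, 5000, 8443 and 32443, each bound to 127.0.0.1) can be read back individually with a Go template instead of parsing the full JSON; this is the same `docker container inspect -f` form the harness itself runs further down in this log. A minimal sketch against the profile from this run:

	# print the 127.0.0.1 host port mapped to the container's SSH port 22/tcp
	# (49569 in the dump above)
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  old-k8s-version-20220325015306-262786
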
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220325015306-262786 -n old-k8s-version-20220325015306-262786
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20220325015306-262786 logs -n 25
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:52 UTC | Fri, 25 Mar 2022 02:14:53 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:51 UTC | Fri, 25 Mar 2022 02:14:54 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:54 UTC | Fri, 25 Mar 2022 02:14:54 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20220325021454-262786 --memory=2200          | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:54 UTC | Fri, 25 Mar 2022 02:15:49 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:49 UTC | Fri, 25 Mar 2022 02:15:50 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:50 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:21 UTC | Fri, 25 Mar 2022 02:16:22 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:23 UTC | Fri, 25 Mar 2022 02:16:24 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:24 UTC | Fri, 25 Mar 2022 02:16:25 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:25 UTC | Fri, 25 Mar 2022 02:16:35 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:35 UTC | Fri, 25 Mar 2022 02:16:35 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20220325021454-262786 --memory=2200          | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:45 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:45 UTC | Fri, 25 Mar 2022 02:16:45 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:45 UTC | Fri, 25 Mar 2022 02:16:46 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:47 UTC | Fri, 25 Mar 2022 02:16:48 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:48 UTC | Fri, 25 Mar 2022 02:16:51 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:51 UTC | Fri, 25 Mar 2022 02:16:52 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786                      | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:19:35 UTC | Fri, 25 Mar 2022 02:19:36 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:55 UTC | Fri, 25 Mar 2022 02:22:56 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:57 UTC | Fri, 25 Mar 2022 02:22:58 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:59 UTC | Fri, 25 Mar 2022 02:22:59 UTC |
	|         | default-k8s-different-port-20220325020956-262786           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:59 UTC | Fri, 25 Mar 2022 02:23:09 UTC |
	|         | default-k8s-different-port-20220325020956-262786           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:23:09 UTC | Fri, 25 Mar 2022 02:23:09 UTC |
	|         | default-k8s-different-port-20220325020956-262786           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:25:38 UTC | Fri, 25 Mar 2022 02:25:39 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 02:23:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0325 02:23:09.537576  530227 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:23:09.537696  530227 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:23:09.537706  530227 out.go:310] Setting ErrFile to fd 2...
	I0325 02:23:09.537710  530227 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:23:09.537815  530227 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:23:09.538048  530227 out.go:304] Setting JSON to false
	I0325 02:23:09.539384  530227 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":18062,"bootTime":1648156928,"procs":378,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:23:09.539464  530227 start.go:125] virtualization: kvm guest
	I0325 02:23:09.542093  530227 out.go:176] * [default-k8s-different-port-20220325020956-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:23:09.543709  530227 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:23:09.542258  530227 notify.go:193] Checking for updates...
	I0325 02:23:09.545591  530227 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:23:09.547307  530227 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:23:09.548939  530227 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:23:09.550462  530227 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:23:09.550916  530227 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:23:09.551395  530227 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:23:09.596032  530227 docker.go:136] docker version: linux-20.10.14
	I0325 02:23:09.596139  530227 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:23:09.694688  530227 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 02:23:09.627733687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:23:09.694822  530227 docker.go:253] overlay module found
	I0325 02:23:09.697284  530227 out.go:176] * Using the docker driver based on existing profile
	I0325 02:23:09.697314  530227 start.go:284] selected driver: docker
	I0325 02:23:09.697321  530227 start.go:801] validating driver "docker" against &{Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:23:09.697441  530227 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:23:09.697477  530227 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:23:09.697500  530227 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:23:09.699359  530227 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:23:09.700002  530227 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:23:09.794728  530227 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 02:23:09.730700135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0325 02:23:09.794990  530227 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:23:09.795026  530227 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:23:09.797186  530227 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
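
	The two warnings above come from minikube probing `docker info` before reusing the profile. One quick cross-check of the same capability by hand, a sketch using `docker info`'s Go template (field names as they appear in the info dump above; minikube additionally inspects the mounted cgroup hierarchy, which is what actually triggers the warning here):

	# prints "true true" when the daemon reports support for memory and swap limits
	docker info --format '{{.MemoryLimit}} {{.SwapLimit}}'
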
	I0325 02:23:09.797321  530227 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 02:23:09.797348  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:09.797358  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:09.797376  530227 start_flags.go:304] config:
	{Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:23:09.799343  530227 out.go:176] * Starting control plane node default-k8s-different-port-20220325020956-262786 in cluster default-k8s-different-port-20220325020956-262786
	I0325 02:23:09.799390  530227 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:23:09.800868  530227 out.go:176] * Pulling base image ...
	I0325 02:23:09.800894  530227 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:23:09.800929  530227 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
	I0325 02:23:09.800950  530227 cache.go:57] Caching tarball of preloaded images
	I0325 02:23:09.800988  530227 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:23:09.801249  530227 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 02:23:09.801271  530227 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on containerd
	I0325 02:23:09.801464  530227 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json ...
	I0325 02:23:09.836753  530227 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:23:09.836785  530227 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:23:09.836808  530227 cache.go:208] Successfully downloaded all kic artifacts
	I0325 02:23:09.836875  530227 start.go:348] acquiring machines lock for default-k8s-different-port-20220325020956-262786: {Name:mk1740da455fcceda9a6f7400776a3a68790d09c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:23:09.836992  530227 start.go:352] acquired machines lock for "default-k8s-different-port-20220325020956-262786" in 82.748µs
	I0325 02:23:09.837017  530227 start.go:94] Skipping create...Using existing machine configuration
	I0325 02:23:09.837034  530227 fix.go:55] fixHost starting: 
	I0325 02:23:09.837307  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:23:09.870534  530227 fix.go:108] recreateIfNeeded on default-k8s-different-port-20220325020956-262786: state=Stopped err=<nil>
	W0325 02:23:09.870565  530227 fix.go:134] unexpected machine state, will restart: <nil>
	I0325 02:23:06.279884  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:08.779908  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:09.872836  530227 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20220325020956-262786" ...
	I0325 02:23:09.872897  530227 cli_runner.go:133] Run: docker start default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.277624  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:23:10.313461  530227 kic.go:420] container "default-k8s-different-port-20220325020956-262786" state is running.
	I0325 02:23:10.314041  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.349467  530227 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json ...
	I0325 02:23:10.349684  530227 machine.go:88] provisioning docker machine ...
	I0325 02:23:10.349734  530227 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220325020956-262786"
	I0325 02:23:10.349784  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.385648  530227 main.go:130] libmachine: Using SSH client type: native
	I0325 02:23:10.385835  530227 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49594 <nil> <nil>}
	I0325 02:23:10.385854  530227 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220325020956-262786 && echo "default-k8s-different-port-20220325020956-262786" | sudo tee /etc/hostname
	I0325 02:23:10.386524  530227 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33004->127.0.0.1:49594: read: connection reset by peer
	I0325 02:23:13.516245  530227 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220325020956-262786
	
	I0325 02:23:13.516321  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.552077  530227 main.go:130] libmachine: Using SSH client type: native
	I0325 02:23:13.552283  530227 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49594 <nil> <nil>}
	I0325 02:23:13.552307  530227 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220325020956-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220325020956-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220325020956-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 02:23:13.671145  530227 main.go:130] libmachine: SSH cmd err, output: <nil>: 
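
	The shell snippet above patches /etc/hosts inside the node so the freshly set hostname resolves locally. The result can be spot-checked over the same SSH channel; a sketch, using the binary and profile name from this run:

	# show the 127.0.1.1 entry written by the provisioner
	out/minikube-linux-amd64 -p default-k8s-different-port-20220325020956-262786 \
	  ssh -- grep 127.0.1.1 /etc/hosts
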
	I0325 02:23:13.671181  530227 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:23:13.671209  530227 ubuntu.go:177] setting up certificates
	I0325 02:23:13.671220  530227 provision.go:83] configureAuth start
	I0325 02:23:13.671284  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.707509  530227 provision.go:138] copyHostCerts
	I0325 02:23:13.707567  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:23:13.707583  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:23:13.707654  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:23:13.707752  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:23:13.707763  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:23:13.707785  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:23:13.707835  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:23:13.707843  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:23:13.707863  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:23:13.707902  530227 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220325020956-262786 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220325020956-262786]
	I0325 02:23:13.801684  530227 provision.go:172] copyRemoteCerts
	I0325 02:23:13.801761  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:23:13.801796  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.837900  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:13.926796  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0325 02:23:13.945040  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:23:13.962557  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0325 02:23:13.980609  530227 provision.go:86] duration metric: configureAuth took 309.376559ms
	I0325 02:23:13.980640  530227 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:23:13.980824  530227 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:23:13.980838  530227 machine.go:91] provisioned docker machine in 3.631132536s
	I0325 02:23:13.980846  530227 start.go:302] post-start starting for "default-k8s-different-port-20220325020956-262786" (driver="docker")
	I0325 02:23:13.980853  530227 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:23:13.980892  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:23:13.980932  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.016302  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.102734  530227 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:23:14.105732  530227 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:23:14.105760  530227 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:23:14.105786  530227 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:23:14.105795  530227 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:23:14.105810  530227 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:23:14.105871  530227 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:23:14.105966  530227 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:23:14.106069  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:23:14.113216  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:23:14.131102  530227 start.go:305] post-start completed in 150.235781ms
	I0325 02:23:14.131193  530227 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:23:14.131252  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.166319  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.255555  530227 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:23:14.259268  530227 fix.go:57] fixHost completed within 4.422236664s
	I0325 02:23:14.259296  530227 start.go:81] releasing machines lock for "default-k8s-different-port-20220325020956-262786", held for 4.422290413s
	I0325 02:23:14.259383  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.295568  530227 ssh_runner.go:195] Run: systemctl --version
	I0325 02:23:14.295622  530227 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:23:14.295624  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.295670  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.331630  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.332124  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.440710  530227 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:23:14.453593  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:23:14.463531  530227 docker.go:183] disabling docker service ...
	I0325 02:23:14.463587  530227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:23:14.473649  530227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:23:14.482885  530227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:23:10.781510  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:13.279624  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:15.280218  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:14.552504  530227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:23:14.625188  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 02:23:14.634619  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:23:14.648987  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
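The payload piped through "base64 -d" above is the containerd config.toml that minikube renders for the node. A minimal Go sketch, assuming the blob is passed as a command-line argument, for decoding such a payload so it can be inspected offline:

package main

import (
	"encoding/base64"
	"fmt"
	"os"
)

// Decode a base64-encoded config payload (such as the containerd
// config.toml written above) and print it for inspection.
func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: decode <base64-blob>")
		os.Exit(1)
	}
	raw, err := base64.StdEncoding.DecodeString(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "decode failed:", err)
		os.Exit(1)
	}
	fmt.Print(string(raw))
}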
	I0325 02:23:14.662584  530227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:23:14.669661  530227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:23:14.676535  530227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:23:14.749687  530227 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:23:14.824010  530227 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:23:14.824124  530227 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:23:14.828479  530227 start.go:462] Will wait 60s for crictl version
	I0325 02:23:14.828546  530227 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:23:14.854134  530227 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:23:14Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0325 02:23:17.779273  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:19.780082  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:21.780204  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:24.279380  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:25.901131  530227 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:23:25.924531  530227 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 02:23:25.924599  530227 ssh_runner.go:195] Run: containerd --version
	I0325 02:23:25.944738  530227 ssh_runner.go:195] Run: containerd --version
	I0325 02:23:25.965406  530227 out.go:176] * Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	I0325 02:23:25.965490  530227 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20220325020956-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:23:25.998365  530227 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0325 02:23:26.001776  530227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:23:26.013555  530227 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 02:23:26.013655  530227 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:23:26.013730  530227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:23:26.037965  530227 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:23:26.037994  530227 containerd.go:526] Images already preloaded, skipping extraction
	I0325 02:23:26.038048  530227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:23:26.062141  530227 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:23:26.062166  530227 cache_images.go:84] Images are preloaded, skipping loading
	I0325 02:23:26.062213  530227 ssh_runner.go:195] Run: sudo crictl info
	I0325 02:23:26.086309  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:26.086334  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:26.086348  530227 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 02:23:26.086361  530227 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220325020956-262786 NodeName:default-k8s-different-port-20220325020956-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 02:23:26.086482  530227 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220325020956-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
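The block above is the multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube writes to /var/tmp/minikube/kubeadm.yaml.new on the node. A small Go sketch, using only the standard library and a hypothetical local copy of that file, that splits such a config into its documents and lists each kind:

package main

import (
	"fmt"
	"os"
	"strings"
)

// Split a multi-document kubeadm config (like the one rendered above)
// into its YAML documents and print each document's kind field.
func main() {
	data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Println(strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}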
	I0325 02:23:26.086574  530227 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220325020956-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0325 02:23:26.086621  530227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
	I0325 02:23:26.093791  530227 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 02:23:26.093861  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 02:23:26.101104  530227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (593 bytes)
	I0325 02:23:26.114154  530227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0325 02:23:26.127481  530227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I0325 02:23:26.139891  530227 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0325 02:23:26.142699  530227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
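The two hosts-file commands above follow the same idempotent pattern: strip any stale line for the name, append the fresh mapping, then copy the result back into place. A Go sketch of that update, operating on a hypothetical local copy rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// Idempotently pin a hostname mapping, mirroring the
// grep -v / echo / cp pipeline shown in the log above.
func main() {
	const entry = "192.168.49.2\tcontrol-plane.minikube.internal"
	hostsPath := "hosts.sample" // hypothetical copy; a missing file is treated as empty
	data, _ := os.ReadFile(hostsPath)
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}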
	I0325 02:23:26.151979  530227 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786 for IP: 192.168.49.2
	I0325 02:23:26.152115  530227 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 02:23:26.152173  530227 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 02:23:26.152283  530227 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.key
	I0325 02:23:26.152367  530227 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key.dd3b5fb2
	I0325 02:23:26.152432  530227 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key
	I0325 02:23:26.152572  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 02:23:26.152618  530227 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 02:23:26.152633  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 02:23:26.152719  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 02:23:26.152762  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 02:23:26.152796  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 02:23:26.152856  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:23:26.153663  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 02:23:26.170543  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0325 02:23:26.188516  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 02:23:26.206252  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0325 02:23:26.223851  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 02:23:26.240997  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 02:23:26.258925  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 02:23:26.276782  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 02:23:26.293956  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 02:23:26.311184  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 02:23:26.328788  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 02:23:26.345739  530227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 02:23:26.358217  530227 ssh_runner.go:195] Run: openssl version
	I0325 02:23:26.363310  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 02:23:26.371143  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.374386  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.374446  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.379667  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
	I0325 02:23:26.386880  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 02:23:26.394406  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.397558  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.397619  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.402576  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0325 02:23:26.409580  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 02:23:26.416799  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.419794  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.419843  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.424480  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
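Each certificate installed above gets a companion symlink in /etc/ssl/certs named after its OpenSSL subject hash (b5213941.0, 3ec20f2e.0, 51391683.0), which is how OpenSSL locates trust anchors at verification time. A Go sketch that reproduces the hash-to-symlink-name mapping by shelling out to openssl, assuming the certificates exist at the listed paths:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Print the /etc/ssl/certs symlink name each CA certificate
// would receive, using openssl's subject-hash computation.
func main() {
	certs := []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/262786.pem",
	}
	for _, c := range certs {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", c).Output()
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s -> /etc/ssl/certs/%s.0\n", c, strings.TrimSpace(string(out)))
	}
}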
	I0325 02:23:26.431093  530227 kubeadm.go:391] StartCluster: {Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:23:26.431219  530227 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 02:23:26.431267  530227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:23:26.455469  530227 cri.go:87] found id: "f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db"
	I0325 02:23:26.455495  530227 cri.go:87] found id: "246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	I0325 02:23:26.455501  530227 cri.go:87] found id: "dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b"
	I0325 02:23:26.455506  530227 cri.go:87] found id: "21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73"
	I0325 02:23:26.455510  530227 cri.go:87] found id: "bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7"
	I0325 02:23:26.455515  530227 cri.go:87] found id: "6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182"
	I0325 02:23:26.455520  530227 cri.go:87] found id: "c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd"
	I0325 02:23:26.455524  530227 cri.go:87] found id: ""
	I0325 02:23:26.455562  530227 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0325 02:23:26.469264  530227 cri.go:114] JSON = null
	W0325 02:23:26.469319  530227 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 7
	I0325 02:23:26.469383  530227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 02:23:26.476380  530227 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0325 02:23:26.476423  530227 kubeadm.go:601] restartCluster start
	I0325 02:23:26.476467  530227 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0325 02:23:26.483313  530227 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:26.484051  530227 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220325020956-262786" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:23:26.484409  530227 kubeconfig.go:127] "default-k8s-different-port-20220325020956-262786" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0325 02:23:26.485050  530227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:23:26.486481  530227 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0325 02:23:26.493604  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.493676  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.502078  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:26.702482  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.702567  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.712014  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:26.902246  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.902320  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.910978  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.103208  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.103289  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.111964  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.303121  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.303213  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.312214  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.502493  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.502598  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.511468  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.702747  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.702890  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.711697  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.902931  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.903050  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.912319  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.102538  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.102634  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.111710  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.303008  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.303080  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.312078  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.502221  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.502313  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.511095  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.702230  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.702303  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.711103  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.902322  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.902413  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.911515  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.102704  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.102774  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.111434  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.302770  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.302858  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.311706  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.503069  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.503150  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.512690  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.512721  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.512770  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.521635  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
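The repeated probe above is minikube's apiserver liveness check: pgrep exits with status 1 while no kube-apiserver process matching the minikube command line exists, and once the loop times out the cluster is reconfigured below. A minimal Go sketch of a single probe of this kind:

package main

import (
	"fmt"
	"os/exec"
)

// Check for a running kube-apiserver whose command line mentions
// "minikube", as the log's retry loop does. A pgrep exit status of 1
// simply means no matching process exists yet.
func main() {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver not running yet:", err)
		return
	}
	fmt.Print("apiserver pid: ", string(out))
}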
	I0325 02:23:29.521669  530227 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0325 02:23:29.521677  530227 kubeadm.go:1067] stopping kube-system containers ...
	I0325 02:23:29.521695  530227 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0325 02:23:29.521749  530227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:23:26.279462  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:28.279531  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:30.280060  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:29.546890  530227 cri.go:87] found id: "f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db"
	I0325 02:23:29.546921  530227 cri.go:87] found id: "246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	I0325 02:23:29.546927  530227 cri.go:87] found id: "dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b"
	I0325 02:23:29.546932  530227 cri.go:87] found id: "21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73"
	I0325 02:23:29.546937  530227 cri.go:87] found id: "bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7"
	I0325 02:23:29.546942  530227 cri.go:87] found id: "6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182"
	I0325 02:23:29.546946  530227 cri.go:87] found id: "c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd"
	I0325 02:23:29.546979  530227 cri.go:87] found id: ""
	I0325 02:23:29.546987  530227 cri.go:232] Stopping containers: [f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db 246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739 dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b 21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73 bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7 6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182 c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd]
	I0325 02:23:29.547049  530227 ssh_runner.go:195] Run: which crictl
	I0325 02:23:29.550389  530227 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db 246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739 dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b 21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73 bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7 6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182 c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd
	I0325 02:23:29.575922  530227 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0325 02:23:29.586795  530227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:23:29.594440  530227 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar 25 02:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Mar 25 02:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2131 Mar 25 02:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Mar 25 02:10 /etc/kubernetes/scheduler.conf
	
	I0325 02:23:29.594520  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0325 02:23:29.601472  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0325 02:23:29.608305  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0325 02:23:29.615261  530227 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.615319  530227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0325 02:23:29.622383  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0325 02:23:29.629095  530227 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.629161  530227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0325 02:23:29.636095  530227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:23:29.642934  530227 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0325 02:23:29.642998  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:29.687932  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.297307  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.428688  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.476555  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.528341  530227 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:23:30.528397  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:31.037340  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:31.536903  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:32.037557  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:32.537100  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:33.037156  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:33.537124  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:34.037604  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:34.536762  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:32.280264  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:34.779413  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:35.037573  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:35.536890  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.037157  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.536733  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.598317  530227 api_server.go:71] duration metric: took 6.069979844s to wait for apiserver process to appear ...
	I0325 02:23:36.598362  530227 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:23:36.598380  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:36.598866  530227 api_server.go:256] stopped: https://192.168.49.2:8444/healthz: Get "https://192.168.49.2:8444/healthz": dial tcp 192.168.49.2:8444: connect: connection refused
	I0325 02:23:37.099575  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:36.779484  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:38.779979  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:40.211650  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0325 02:23:40.211687  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0325 02:23:40.599053  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:40.603812  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:23:40.603846  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:23:41.099269  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:41.104481  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:23:41.104517  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:23:41.599902  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:41.604945  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0325 02:23:41.612918  530227 api_server.go:140] control plane version: v1.23.3
	I0325 02:23:41.612944  530227 api_server.go:130] duration metric: took 5.014575703s to wait for apiserver health ...
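The healthz sequence above is typical for a restarted control plane: first a connection refusal, then 403 for the anonymous user, then 500 while the RBAC and priority-class bootstrap hooks finish, and finally 200. A Go sketch of the same poll, skipping TLS verification because the test cluster serves a self-signed certificate (address and port taken from the log):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Poll the apiserver /healthz endpoint until it returns 200 "ok",
// tolerating the transient 403/500 responses seen in the log above.
func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 60; i++ {
		resp, err := client.Get("https://192.168.49.2:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body))
				return
			}
			fmt.Println("healthz status:", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}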
	I0325 02:23:41.612957  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:41.612965  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:41.615242  530227 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:23:41.615325  530227 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:23:41.619644  530227 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 02:23:41.619669  530227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:23:41.633910  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:23:42.356822  530227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:23:42.365307  530227 system_pods.go:59] 9 kube-system pods found
	I0325 02:23:42.365343  530227 system_pods.go:61] "coredns-64897985d-9tgbz" [0d638e01-927d-4431-bf10-393b424f801a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365353  530227 system_pods.go:61] "etcd-default-k8s-different-port-20220325020956-262786" [10e10258-89d5-423b-850f-60ef4b12b83a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0325 02:23:42.365361  530227 system_pods.go:61] "kindnet-kt955" [87a42b24-60b7-415b-abc9-e574262093c0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:23:42.365368  530227 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220325020956-262786" [877f6ccd-dcc7-47ff-8574-9b9ec1b05a25] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0325 02:23:42.365376  530227 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220325020956-262786" [cbd16e08-169e-458a-b9c2-bcaa627475cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0325 02:23:42.365382  530227 system_pods.go:61] "kube-proxy-7cpjt" [6d1657ba-6fcd-4ee8-8293-b6aa0b7e1fb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0325 02:23:42.365387  530227 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220325020956-262786" [7b21b770-272f-4183-a1e4-6cca761e7be8] Running
	I0325 02:23:42.365395  530227 system_pods.go:61] "metrics-server-b955d9d8-h94qn" [f250996f-f9e2-41f2-ba86-6da05d627811] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365401  530227 system_pods.go:61] "storage-provisioner" [1f4e27b1-94bb-49ed-b16e-7237ce00c11a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365409  530227 system_pods.go:74] duration metric: took 8.560724ms to wait for pod list to return data ...
	I0325 02:23:42.365419  530227 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:23:42.368395  530227 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:23:42.368426  530227 node_conditions.go:123] node cpu capacity is 8
	I0325 02:23:42.368439  530227 node_conditions.go:105] duration metric: took 3.013418ms to run NodePressure ...
	I0325 02:23:42.368460  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:42.498603  530227 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0325 02:23:42.503044  530227 kubeadm.go:752] kubelet initialised
	I0325 02:23:42.503087  530227 kubeadm.go:753] duration metric: took 4.396508ms waiting for restarted kubelet to initialise ...
	I0325 02:23:42.503097  530227 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:23:42.508446  530227 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-9tgbz" in "kube-system" namespace to be "Ready" ...
	I0325 02:23:44.514894  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:41.279719  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:43.779807  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
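	From this point the transcript interleaves two parallel clusters, which is why the timestamps jump back and forth: process 530227 keeps re-checking the coredns pod in default-k8s-different-port-20220325020956-262786, while process 519649 keeps re-checking node readiness in no-preload-20220325020326-262786. Both stalls have the same shape: until the kubelet reports the node Ready, the node.kubernetes.io/not-ready taint stays on the node and the scheduler rejects every pod that does not tolerate it, so the pod wait cannot succeed before the node wait does. A small sketch of the node-side test (illustrative names; assumes a corev1 Node fetched via client-go):

	    package nodewait

	    import (
	        corev1 "k8s.io/api/core/v1"
	    )

	    // nodeBlocked reports whether a node is not Ready and still carries
	    // the scheduling taint named in the Unschedulable messages above.
	    func nodeBlocked(node *corev1.Node) (notReady, tainted bool) {
	        notReady = true
	        for _, c := range node.Status.Conditions {
	            if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
	                notReady = false
	            }
	        }
	        for _, t := range node.Spec.Taints {
	            if t.Key == "node.kubernetes.io/not-ready" {
	                tainted = true
	            }
	        }
	        return notReady, tainted
	    }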
	I0325 02:23:46.515114  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:49.014836  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:46.279223  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:48.279265  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:50.280221  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:51.514564  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:54.014786  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:52.780223  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:55.279495  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:56.514871  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:59.014358  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:57.280104  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:59.779435  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:01.015007  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:03.514691  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:01.779945  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:03.780076  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:05.515135  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:08.014925  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:06.279495  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:08.280022  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:10.514744  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:12.514875  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:10.779769  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:13.279988  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:15.014427  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:17.514431  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:15.779111  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:17.779860  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:20.282496  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:20.015198  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:22.514500  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:22.779392  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:24.779583  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:25.014188  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:27.015284  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:29.515114  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:27.280129  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:29.779139  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:32.015294  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:34.514331  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:31.779438  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:34.279292  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:36.514446  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:39.014203  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:36.280233  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:38.779288  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:41.015081  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:43.515133  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:40.779876  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:43.279836  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:45.280111  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:46.014807  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:48.513848  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:47.779037  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:49.779225  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:50.514522  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:53.014610  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:52.279107  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:54.279992  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:55.514800  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:58.014633  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:56.280212  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:58.779953  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:00.514555  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:02.514600  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:04.514849  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:00.780066  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:03.279884  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:05.280246  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:07.014221  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:09.014509  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:07.780397  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:10.279278  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:11.014691  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:13.014798  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:12.779414  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:15.279560  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:15.514210  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:17.514263  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:19.515014  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:17.779664  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:19.779727  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:22.014469  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:24.015322  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:22.279477  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:24.779885  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:26.514766  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:29.014967  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:27.279254  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:29.280083  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:31.514230  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:34.014655  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:31.779951  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:34.279813  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:36.279928  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:38.282274  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:38.282298  519649 node_ready.go:38] duration metric: took 4m0.009544217s waiting for node "no-preload-20220325020326-262786" to be "Ready" ...
	I0325 02:25:38.285018  519649 out.go:176] 
	W0325 02:25:38.285266  519649 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:25:38.285284  519649 out.go:241] * 
	W0325 02:25:38.286304  519649 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
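	The 519649 run fails on its inner budget: node_ready gave the node 4m0s, nested inside the 6m0s that the GUEST_START message "wait 6m0s for node" refers to, and the process exits with that error while the 530227 run keeps polling below. A hedged sketch of nesting two such deadlines with contexts (the durations are the ones the log prints; everything else is illustrative, and the inner function is expected to honour its context and surface ctx.Err() on expiry):

	    package waits

	    import (
	        "context"
	        "errors"
	        "time"
	    )

	    // runWithBudgets nests a 4m inner wait (e.g. node Ready) inside a 6m
	    // outer start budget, the two durations reported in the log above.
	    func runWithBudgets(inner func(context.Context) error) error {
	        outer, cancelOuter := context.WithTimeout(context.Background(), 6*time.Minute)
	        defer cancelOuter()

	        nodeCtx, cancelNode := context.WithTimeout(outer, 4*time.Minute)
	        defer cancelNode()

	        if err := inner(nodeCtx); err != nil {
	            if errors.Is(err, context.DeadlineExceeded) {
	                return errors.New("waiting for node to be ready: timed out waiting for the condition")
	            }
	            return err
	        }
	        return nil
	    }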
	I0325 02:25:36.513926  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:38.514284  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:40.514364  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:42.514810  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:45.014814  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:47.016025  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:49.514217  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:51.514677  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:54.014149  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:56.014605  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:58.514592  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:01.014803  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:03.015044  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:05.514261  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:07.514811  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:10.014055  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:12.015163  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:14.514268  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:16.514780  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:19.014786  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:21.513928  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:23.514819  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:26.015135  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:28.514150  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:30.514433  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:32.514515  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:35.014162  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:37.014582  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:39.015192  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:41.514478  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:43.514682  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... identical pod_ready Unschedulable status logged every ~2.5s from 02:26:46 through 02:27:38, 24 near-duplicate lines elided ...]
	I0325 02:27:40.514302  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:42.510816  530227 pod_ready.go:81] duration metric: took 4m0.002335219s waiting for pod "coredns-64897985d-9tgbz" in "kube-system" namespace to be "Ready" ...
	E0325 02:27:42.510845  530227 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9tgbz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0325 02:27:42.510866  530227 pod_ready.go:38] duration metric: took 4m0.007755725s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:27:42.510971  530227 kubeadm.go:605] restartCluster took 4m16.034541089s
	W0325 02:27:42.511146  530227 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0325 02:27:42.511207  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0325 02:27:44.339219  530227 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.827981438s)
	I0325 02:27:44.339290  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:27:44.348982  530227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:27:44.356461  530227 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 02:27:44.356520  530227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:27:44.363951  530227 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
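	The failed config check above is the expected state right after `kubeadm reset`: the reset wipes the kubeconfig files under /etc/kubernetes, so minikube skips stale-config cleanup and goes straight to a fresh `kubeadm init`. The same state can be confirmed by hand from the host (a sketch, using the profile name from this log):
	
	    $ minikube ssh -p default-k8s-different-port-20220325020956-262786 "sudo ls -la /etc/kubernetes/"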
	I0325 02:27:44.364022  530227 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0325 02:27:57.283699  530227 out.go:203]   - Generating certificates and keys ...
	I0325 02:27:57.286878  530227 out.go:203]   - Booting up control plane ...
	I0325 02:27:57.289872  530227 out.go:203]   - Configuring RBAC rules ...
	I0325 02:27:57.291696  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:27:57.291719  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:27:57.293919  530227 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:27:57.294011  530227 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:27:57.297810  530227 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 02:27:57.297833  530227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:27:57.312402  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
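	The CNI manager chose kindnet for the docker driver + containerd combination and applied its manifest with the cluster's bundled kubectl. Whether that apply actually left a CNI config on the node — which is what kubelet readiness later hinges on — can be checked with something like (a sketch; `crictl ps --name` filters by name regex in recent releases):
	
	    $ minikube ssh -p default-k8s-different-port-20220325020956-262786 \
	        "ls /etc/cni/net.d/ && sudo crictl ps -a --name kindnet"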
	I0325 02:27:58.034457  530227 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 02:27:58.034521  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=default-k8s-different-port-20220325020956-262786 minikube.k8s.io/updated_at=2022_03_25T02_27_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:58.034522  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:58.101247  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:58.118657  530227 ops.go:34] apiserver oom_adj: -16
	I0325 02:27:58.688158  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... the same `kubectl get sa default` probe repeated every ~500ms from 02:27:59 through 02:28:09, 21 near-duplicate lines elided ...]
	I0325 02:28:09.688031  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:09.751277  530227 kubeadm.go:1020] duration metric: took 11.716819332s to wait for elevateKubeSystemPrivileges.
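	The burst of identical `kubectl get sa default` runs above is a readiness probe: the `default` ServiceAccount only appears once the controller-manager's serviceaccount controller is up, so minikube polls roughly every 500ms until the call succeeds and then grants kube-system the cluster-admin binding. A hand-rolled equivalent, built from the same command and paths the log shows:
	
	    until sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done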
	I0325 02:28:09.751307  530227 kubeadm.go:393] StartCluster complete in 4m43.320221544s
	I0325 02:28:09.751334  530227 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:28:09.751483  530227 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:28:09.752678  530227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:28:10.268555  530227 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220325020956-262786" rescaled to 1
	I0325 02:28:10.268633  530227 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:28:10.268674  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 02:28:10.270943  530227 out.go:176] * Verifying Kubernetes components...
	I0325 02:28:10.268968  530227 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0325 02:28:10.269163  530227 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:28:10.271075  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:28:10.271163  530227 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271166  530227 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271183  530227 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271188  530227 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271189  530227 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.271192  530227 addons.go:165] addon metrics-server should already be in state true
	I0325 02:28:10.271207  530227 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.271217  530227 addons.go:165] addon dashboard should already be in state true
	I0325 02:28:10.271232  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.271251  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.271164  530227 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271302  530227 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.271316  530227 addons.go:165] addon storage-provisioner should already be in state true
	I0325 02:28:10.271343  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.271538  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.271708  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.271708  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.271833  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.344040  530227 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0325 02:28:10.344132  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0325 02:28:10.344144  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0325 02:28:10.344219  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.346679  530227 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:28:10.346811  530227 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:28:10.346826  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 02:28:10.346882  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.353938  530227 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0325 02:28:10.355562  530227 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I0325 02:28:10.355640  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0325 02:28:10.355656  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0325 02:28:10.355719  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.383518  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.385223  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.391464  530227 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.391493  530227 addons.go:165] addon default-storageclass should already be in state true
	I0325 02:28:10.391524  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.392049  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.400074  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.437891  530227 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 02:28:10.437915  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 02:28:10.437962  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.471205  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.494523  530227 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220325020956-262786" to be "Ready" ...
	I0325 02:28:10.494562  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
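	The sed pipeline above splices a hosts block into the CoreDNS Corefile just ahead of its `forward . /etc/resolv.conf` directive, so that `host.minikube.internal` resolves to the host gateway. Going by the sed expression, the injected stanza is:
	
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }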
	I0325 02:28:10.609220  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:28:10.609589  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0325 02:28:10.609655  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0325 02:28:10.609633  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0325 02:28:10.609758  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0325 02:28:10.700787  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0325 02:28:10.700823  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0325 02:28:10.701805  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0325 02:28:10.701834  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0325 02:28:10.800343  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0325 02:28:10.800381  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0325 02:28:10.808905  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 02:28:10.810521  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:28:10.810550  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0325 02:28:10.899094  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0325 02:28:10.899126  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0325 02:28:10.905212  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:28:11.003467  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0325 02:28:11.003501  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0325 02:28:11.102902  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0325 02:28:11.102933  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0325 02:28:11.203761  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0325 02:28:11.203793  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0325 02:28:11.210549  530227 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0325 02:28:11.294868  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0325 02:28:11.294905  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0325 02:28:11.407000  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:28:11.407036  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0325 02:28:11.594579  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:28:11.993918  530227 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.08865456s)
	I0325 02:28:11.994023  530227 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:12.406348  530227 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0325 02:28:12.406384  530227 addons.go:417] enableAddons completed in 2.137426118s
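	With enableAddons complete, the set actually enabled for the profile can be double-checked from the host (a sketch):
	
	    $ minikube addons list -p default-k8s-different-port-20220325020956-262786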
	I0325 02:28:12.501452  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	[... node still "Ready":"False" on each ~2.5s poll from 02:28:15 through 02:28:31, 8 near-duplicate lines elided ...]
	I0325 02:28:34.001805  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
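	The node_ready poll above keeps returning "Ready":"False". The same condition can be watched directly, assuming a kubeconfig pointed at this cluster (a sketch):
	
	    $ kubectl get node default-k8s-different-port-20220325020956-262786 \
	        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    $ kubectl wait node/default-k8s-different-port-20220325020956-262786 \
	        --for=condition=Ready --timeout=6m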
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	bfefe565bdea0       6de166512aa22       56 seconds ago      Running             kindnet-cni               4                   31b95e88dc2af
	c52245b34bae3       6de166512aa22       4 minutes ago       Exited              kindnet-cni               3                   31b95e88dc2af
	0f0c7b7b9b87a       c21b0c7400f98       13 minutes ago      Running             kube-proxy                0                   4dfb05edc119d
	85b04b6171d3e       06a629a7e51cd       13 minutes ago      Running             kube-controller-manager   0                   711c9a59158f6
	b5876f14d59d1       b2756210eeabf       13 minutes ago      Running             etcd                      0                   5baefcef4d5b1
	3e153bc9be8e3       b305571ca60a5       13 minutes ago      Running             kube-apiserver            0                   dd77722cd3ddb
	df11628d76654       301ddc62b80b1       13 minutes ago      Running             kube-scheduler            0                   0881b8d78c2d7
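	Note that kindnet-cni is on attempt 4 with its previous attempt Exited, while every control-plane container is still on attempt 0: the CNI daemon is the only thing crash-looping. The exit reason of the dead attempt can be pulled from the CRI on the node (a sketch; container ID abbreviated as in the table above):
	
	    $ sudo crictl logs c52245b34bae3 | tail -n 20
	    $ sudo crictl inspect c52245b34bae3 | grep -i -A2 exitcode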
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-03-25 02:09:40 UTC, end at Fri 2022-03-25 02:28:38 UTC. --
	Mar 25 02:20:56 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:20:56.884497478Z" level=info msg="RemoveContainer for \"a0112f59e2b6fd103d27456f7718a8ab098cd926beeedc18ef55292467ac828d\" returns successfully"
	Mar 25 02:21:09 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:21:09.336653432Z" level=info msg="CreateContainer within sandbox \"31b95e88dc2af7fec2970270795b024a3276e9f48c04816cae5385c2cf69b2c4\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Mar 25 02:21:09 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:21:09.351062948Z" level=info msg="CreateContainer within sandbox \"31b95e88dc2af7fec2970270795b024a3276e9f48c04816cae5385c2cf69b2c4\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"749933206c3637dfd76995668e4b945684021202230608ebb5cd74b46105ec49\""
	Mar 25 02:21:09 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:21:09.351531867Z" level=info msg="StartContainer for \"749933206c3637dfd76995668e4b945684021202230608ebb5cd74b46105ec49\""
	Mar 25 02:21:09 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:21:09.507071790Z" level=info msg="StartContainer for \"749933206c3637dfd76995668e4b945684021202230608ebb5cd74b46105ec49\" returns successfully"
	Mar 25 02:23:49 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:23:49.733468376Z" level=info msg="shim disconnected" id=749933206c3637dfd76995668e4b945684021202230608ebb5cd74b46105ec49
	Mar 25 02:23:49 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:23:49.733520150Z" level=warning msg="cleaning up after shim disconnected" id=749933206c3637dfd76995668e4b945684021202230608ebb5cd74b46105ec49 namespace=k8s.io
	Mar 25 02:23:49 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:23:49.733539633Z" level=info msg="cleaning up dead shim"
	Mar 25 02:23:49 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:23:49.744422674Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:23:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5893\n"
	Mar 25 02:23:50 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:23:50.097349748Z" level=info msg="RemoveContainer for \"f6cf87321d1b58562784760c34ba7a202790363bad9de268defd07f3272c67f8\""
	Mar 25 02:23:50 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:23:50.102499864Z" level=info msg="RemoveContainer for \"f6cf87321d1b58562784760c34ba7a202790363bad9de268defd07f3272c67f8\" returns successfully"
	Mar 25 02:24:17 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:24:17.336246293Z" level=info msg="CreateContainer within sandbox \"31b95e88dc2af7fec2970270795b024a3276e9f48c04816cae5385c2cf69b2c4\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Mar 25 02:24:17 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:24:17.349801644Z" level=info msg="CreateContainer within sandbox \"31b95e88dc2af7fec2970270795b024a3276e9f48c04816cae5385c2cf69b2c4\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"c52245b34bae304de0cb64ef8c48702e727c7aaeb43c0139734bd38d47a29c67\""
	Mar 25 02:24:17 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:24:17.350275750Z" level=info msg="StartContainer for \"c52245b34bae304de0cb64ef8c48702e727c7aaeb43c0139734bd38d47a29c67\""
	Mar 25 02:24:17 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:24:17.504797050Z" level=info msg="StartContainer for \"c52245b34bae304de0cb64ef8c48702e727c7aaeb43c0139734bd38d47a29c67\" returns successfully"
	Mar 25 02:26:57 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:26:57.821714314Z" level=info msg="shim disconnected" id=c52245b34bae304de0cb64ef8c48702e727c7aaeb43c0139734bd38d47a29c67
	Mar 25 02:26:57 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:26:57.821784064Z" level=warning msg="cleaning up after shim disconnected" id=c52245b34bae304de0cb64ef8c48702e727c7aaeb43c0139734bd38d47a29c67 namespace=k8s.io
	Mar 25 02:26:57 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:26:57.821794684Z" level=info msg="cleaning up dead shim"
	Mar 25 02:26:57 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:26:57.832331331Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:26:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6356\n"
	Mar 25 02:26:58 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:26:58.328512919Z" level=info msg="RemoveContainer for \"749933206c3637dfd76995668e4b945684021202230608ebb5cd74b46105ec49\""
	Mar 25 02:26:58 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:26:58.334379348Z" level=info msg="RemoveContainer for \"749933206c3637dfd76995668e4b945684021202230608ebb5cd74b46105ec49\" returns successfully"
	Mar 25 02:27:41 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:27:41.336790044Z" level=info msg="CreateContainer within sandbox \"31b95e88dc2af7fec2970270795b024a3276e9f48c04816cae5385c2cf69b2c4\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Mar 25 02:27:41 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:27:41.350690390Z" level=info msg="CreateContainer within sandbox \"31b95e88dc2af7fec2970270795b024a3276e9f48c04816cae5385c2cf69b2c4\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"bfefe565bdea0cc363ab9509353feae61283e6ba351f9d45dd8086eacda1edcc\""
	Mar 25 02:27:41 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:27:41.351249050Z" level=info msg="StartContainer for \"bfefe565bdea0cc363ab9509353feae61283e6ba351f9d45dd8086eacda1edcc\""
	Mar 25 02:27:41 old-k8s-version-20220325015306-262786 containerd[492]: time="2022-03-25T02:27:41.588925132Z" level=info msg="StartContainer for \"bfefe565bdea0cc363ab9509353feae61283e6ba351f9d45dd8086eacda1edcc\" returns successfully"
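	The containerd log shows the same pattern from the runtime side: each kindnet attempt runs for roughly two and a half minutes before its shim disconnects, after which kubelet's crash-loop backoff schedules the next attempt. The full runtime-side history is in the unit journal (a sketch):
	
	    $ sudo journalctl -u containerd --since "2022-03-25 02:20:00" | grep -i kindnet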
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220325015306-262786
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220325015306-262786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95
	                    minikube.k8s.io/name=old-k8s-version-20220325015306-262786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_25T02_15_19_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Mar 2022 02:15:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Mar 2022 02:28:14 +0000   Fri, 25 Mar 2022 02:15:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Mar 2022 02:28:14 +0000   Fri, 25 Mar 2022 02:15:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Mar 2022 02:28:14 +0000   Fri, 25 Mar 2022 02:15:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 25 Mar 2022 02:28:14 +0000   Fri, 25 Mar 2022 02:15:10 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-20220325015306-262786
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	System Info:
	 Machine ID:                 140a143b31184b58be947b52a01fff83
	 System UUID:                586019ba-8c2c-445d-9550-f545f1f4ef4d
	 Boot ID:                    63fce5d9-a30b-498a-bfed-7dd46d23a363
	 Kernel Version:             5.13.0-1021-gcp
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.5.10
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20220325015306-262786                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kindnet-vb8zw                                                    100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                kube-apiserver-old-k8s-version-20220325015306-262786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-controller-manager-old-k8s-version-20220325015306-262786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-proxy-w2fhc                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-scheduler-old-k8s-version-20220325015306-262786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From                                               Message
	  ----    ------                   ----               ----                                               -------
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet, old-k8s-version-20220325015306-262786     Node old-k8s-version-20220325015306-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet, old-k8s-version-20220325015306-262786     Node old-k8s-version-20220325015306-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet, old-k8s-version-20220325015306-262786     Node old-k8s-version-20220325015306-262786 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kube-proxy, old-k8s-version-20220325015306-262786  Starting kube-proxy.
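	Tying the describe output together: the node still carries the node.kubernetes.io/not-ready:NoSchedule taint because kubelet reports "cni plugin not initialized", and that taint is exactly what the scheduler cites in the "0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }" messages seen earlier in this report. Both facts can be read side by side (a sketch):
	
	    $ kubectl describe node old-k8s-version-20220325015306-262786 | grep -E 'Taints:|NetworkReady'
	
	There is nothing to untaint by hand; the node lifecycle controller clears this taint on its own once a CNI config lands in /etc/cni/net.d and kubelet reports Ready.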
	
	* 
	* ==> dmesg <==
	* [  +1.011896] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.023877] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +0.953086] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethf4b51852
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 70 36 f8 5f b0 08 06
	[  +0.031950] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth6535462d
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a c6 0e 0e 23 49 08 06
	[  +0.644934] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth6535462d
	[  +0.401878] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth5b52bbbf
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 37 c2 ed 50 67 08 06
	[  +0.935995] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.035860] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.019942] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[Mar25 02:14] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth3401b1e0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 76 e9 ba cf fb f8 08 06
	[  +0.179199] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vetha9eb2fdf
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 32 f8 c1 5c 31 f0 08 06
	[  +0.564272] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethc1de7e82
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e d1 71 83 67 99 08 06
	[  +0.295714] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth71c4bd69
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe 35 ee 14 12 82 08 06
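	The "martian source" lines are the kernel flagging packets whose source address should not appear on the interface they arrived on — here pod-subnet 10.244.x.x traffic hitting eth0 and veth devices while the CNI plumbing is in flux. They show up because martian logging is enabled; the knob is a sysctl (a sketch):
	
	    $ sysctl net.ipv4.conf.all.log_martians
	    $ sudo sysctl -w net.ipv4.conf.all.log_martians=0    # silence the log spam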
	
	* 
	* ==> etcd [b5876f14d59d16a3864556cd668573e5ab98ce4ada95bce268d655a2dedf6463] <==
	* 2022-03-25 02:15:10.602847 I | raft: ea7e25599daad906 became follower at term 0
	2022-03-25 02:15:10.602853 I | raft: newRaft ea7e25599daad906 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2022-03-25 02:15:10.602857 I | raft: ea7e25599daad906 became follower at term 1
	2022-03-25 02:15:10.609968 W | auth: simple token is not cryptographically signed
	2022-03-25 02:15:10.612943 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2022-03-25 02:15:10.613254 I | etcdserver: ea7e25599daad906 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2022-03-25 02:15:10.613542 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2022-03-25 02:15:10.615595 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2022-03-25 02:15:10.615734 I | embed: listening for metrics on http://192.168.76.2:2381
	2022-03-25 02:15:10.615857 I | embed: listening for metrics on http://127.0.0.1:2381
	2022-03-25 02:15:11.103195 I | raft: ea7e25599daad906 is starting a new election at term 1
	2022-03-25 02:15:11.103233 I | raft: ea7e25599daad906 became candidate at term 2
	2022-03-25 02:15:11.103262 I | raft: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	2022-03-25 02:15:11.103284 I | raft: ea7e25599daad906 became leader at term 2
	2022-03-25 02:15:11.103292 I | raft: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2022-03-25 02:15:11.103528 I | etcdserver: setting up the initial cluster version to 3.3
	2022-03-25 02:15:11.104488 N | etcdserver/membership: set the initial cluster version to 3.3
	2022-03-25 02:15:11.104548 I | etcdserver/api: enabled capabilities for version 3.3
	2022-03-25 02:15:11.104577 I | etcdserver: published {Name:old-k8s-version-20220325015306-262786 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2022-03-25 02:15:11.104644 I | embed: ready to serve client requests
	2022-03-25 02:15:11.104663 I | embed: ready to serve client requests
	2022-03-25 02:15:11.107618 I | embed: serving client requests on 192.168.76.2:2379
	2022-03-25 02:15:11.108050 I | embed: serving client requests on 127.0.0.1:2379
	2022-03-25 02:25:11.290459 I | mvcc: store.index: compact 563
	2022-03-25 02:25:11.291500 I | mvcc: finished scheduled compaction at 563 (took 646.068µs)
	
	* 
	* ==> kernel <==
	*  02:28:38 up  5:06,  0 users,  load average: 0.21, 0.37, 0.80
	Linux old-k8s-version-20220325015306-262786 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [3e153bc9be8e3a8f1bd845e03b812a61f58707416dc259c62fd7639162e71b2e] <==
	* I0325 02:21:15.288844       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0325 02:21:15.288950       1 handler_proxy.go:99] no RequestInfo found in the context
	E0325 02:21:15.289013       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:21:15.289036       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0325 02:23:15.289304       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0325 02:23:15.289388       1 handler_proxy.go:99] no RequestInfo found in the context
	E0325 02:23:15.289452       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:23:15.289465       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0325 02:25:15.290686       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0325 02:25:15.290780       1 handler_proxy.go:99] no RequestInfo found in the context
	E0325 02:25:15.290860       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:25:15.290875       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0325 02:26:15.291081       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0325 02:26:15.291170       1 handler_proxy.go:99] no RequestInfo found in the context
	E0325 02:26:15.291235       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:26:15.291263       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0325 02:28:15.291495       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0325 02:28:15.291593       1 handler_proxy.go:99] no RequestInfo found in the context
	E0325 02:28:15.291680       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:28:15.291698       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
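The aggregator retries v1beta1.metrics.k8s.io every two minutes and gets a 503 each time: the APIService backing metrics-server never became reachable. A hypothetical way to confirm, outside the captured run, is to query the APIService object directly; an Available=False condition with reason FailedDiscoveryCheck or MissingEndpoints would match these 503s:

    $ kubectl --context old-k8s-version-20220325015306-262786 get apiservice v1beta1.metrics.k8s.io -o wide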
	
	* 
	* ==> kube-controller-manager [85b04b6171d3e514ba8fe84e6ca60fc75eb1ff0ea1fc607a2b792d171f02a5a0] <==
	* E0325 02:22:07.477525       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:22:30.226869       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:22:37.729203       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:23:02.228480       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:23:07.980873       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:23:34.230222       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:23:38.232419       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:24:06.231987       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:24:08.483972       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:24:38.233551       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:24:38.735675       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0325 02:25:08.987073       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:25:10.235171       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:25:39.238719       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:25:42.236706       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:26:09.490352       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:26:14.238318       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:26:39.741860       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:26:46.239869       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:27:09.993588       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:27:18.241602       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:27:40.245204       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:27:50.243375       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:28:10.497190       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:28:22.244945       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
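These resource-quota and garbage-collector warnings have the same root cause as the apiserver 503s above: group discovery includes the dead metrics.k8s.io/v1beta1 endpoint. Note that this suite deliberately overrides the metrics-server registry to fake.domain (visible in the SecondStart config dump further down), so the image can never be pulled and the APIService can never turn healthy; this noise is expected in these tests and is not the failure itself. To find the stuck pod (assuming the addon's usual k8s-app=metrics-server label):

    $ kubectl --context old-k8s-version-20220325015306-262786 -n kube-system get pods -l k8s-app=metrics-server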
	
	* 
	* ==> kube-proxy [0f0c7b7b9b87a3fb5fb66ad36820e076e8a71d97b4e72f8850b0f7664c56e904] <==
	* W0325 02:15:34.314982       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0325 02:15:34.321515       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0325 02:15:34.321546       1 server_others.go:149] Using iptables Proxier.
	I0325 02:15:34.321956       1 server.go:529] Version: v1.16.0
	I0325 02:15:34.322406       1 config.go:131] Starting endpoints config controller
	I0325 02:15:34.322437       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0325 02:15:34.322518       1 config.go:313] Starting service config controller
	I0325 02:15:34.322542       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0325 02:15:34.422634       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0325 02:15:34.422699       1 shared_informer.go:204] Caches are synced for service config 
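kube-proxy, by contrast, came up cleanly: it fell back to the iptables proxier and synced both config caches within a second. A spot-check of the generated rules from the host (a sketch reusing this run's profile name) could be:

    $ out/minikube-linux-amd64 ssh -p old-k8s-version-20220325015306-262786 -- sudo iptables -t nat -S KUBE-SERVICES | head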
	
	* 
	* ==> kube-scheduler [df11628d7665411966cbbb2ac185c87a07097d8b6f2ce2aac3800860bfd82f72] <==
	* I0325 02:15:14.298832       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0325 02:15:14.392493       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 02:15:14.399607       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 02:15:14.399693       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0325 02:15:14.399755       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:15:14.399842       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 02:15:14.399961       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 02:15:14.400414       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0325 02:15:14.400441       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 02:15:14.400427       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 02:15:14.402696       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0325 02:15:14.404598       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0325 02:15:15.393856       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 02:15:15.400840       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 02:15:15.402274       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0325 02:15:15.403210       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:15:15.404458       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 02:15:15.405563       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 02:15:15.406904       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0325 02:15:15.408069       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 02:15:15.409089       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0325 02:15:15.410312       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0325 02:15:15.411358       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0325 02:15:36.416025       1 factory.go:585] pod is already present in the activeQ
	E0325 02:15:36.936626       1 factory.go:585] pod is already present in the activeQ
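The burst of "forbidden" errors is the normal startup race: the scheduler's informers begin listing before the bootstrap RBAC for system:kube-scheduler has been reconciled, and the errors stop after 02:15:15. The two "pod is already present in the activeQ" lines are likewise benign queue noise. A hypothetical after-the-fact check that the binding exists:

    $ kubectl --context old-k8s-version-20220325015306-262786 get clusterrolebinding system:kube-scheduler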
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-03-25 02:09:40 UTC, end at Fri 2022-03-25 02:28:38 UTC. --
	Mar 25 02:26:49 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:26:49.591232    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:26:54 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:26:54.592010    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:26:58 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:26:58.328577    3104 pod_workers.go:191] Error syncing pod 444e4f5d-2509-464b-ad2a-252f5a8b7ff2 ("kindnet-vb8zw_kube-system(444e4f5d-2509-464b-ad2a-252f5a8b7ff2)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-vb8zw_kube-system(444e4f5d-2509-464b-ad2a-252f5a8b7ff2)"
	Mar 25 02:26:59 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:26:59.592730    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:27:04 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:27:04.593468    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:27:09 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:27:09.594281    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:27:12 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:27:12.333938    3104 pod_workers.go:191] Error syncing pod 444e4f5d-2509-464b-ad2a-252f5a8b7ff2 ("kindnet-vb8zw_kube-system(444e4f5d-2509-464b-ad2a-252f5a8b7ff2)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-vb8zw_kube-system(444e4f5d-2509-464b-ad2a-252f5a8b7ff2)"
	Mar 25 02:27:14 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:27:14.595038    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:27:19 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:27:19.595804    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:27:24 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:27:24.596537    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:27:27 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:27:27.334032    3104 pod_workers.go:191] Error syncing pod 444e4f5d-2509-464b-ad2a-252f5a8b7ff2 ("kindnet-vb8zw_kube-system(444e4f5d-2509-464b-ad2a-252f5a8b7ff2)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-vb8zw_kube-system(444e4f5d-2509-464b-ad2a-252f5a8b7ff2)"
	Mar 25 02:27:29 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:27:29.597176    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:27:34 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:27:34.597836    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:27:39 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:27:39.598640    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:27:44 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:27:44.599449    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:27:49 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:27:49.600137    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:27:54 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:27:54.600868    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:27:59 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:27:59.601604    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:28:04 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:28:04.602372    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:28:09 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:28:09.603101    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:28:14 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:28:14.603888    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:28:19 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:28:19.604619    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:28:24 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:28:24.605285    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:28:29 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:28:29.606049    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Mar 25 02:28:34 old-k8s-version-20220325015306-262786 kubelet[3104]: E0325 02:28:34.608288    3104 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
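This kubelet section shows the actual failure: the CNI plugin is never initialized because the kindnet pod that should install the network config keeps crash-looping. Since this suite points the kubelet at a non-default conf dir (the cni-conf-dir=/etc/cni/net.mk extra option, visible in the config dumps below), two hedged next steps would be to pull the crashed container's previous logs and to check whether anything was ever written to that directory (the app=kindnet label is assumed from kindnet's default manifest):

    $ kubectl --context old-k8s-version-20220325015306-262786 -n kube-system logs -l app=kindnet --previous
    $ out/minikube-linux-amd64 ssh -p old-k8s-version-20220325015306-262786 -- ls -la /etc/cni/net.mk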
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220325015306-262786 -n old-k8s-version-20220325015306-262786
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20220325015306-262786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-5644d7b6d9-sv5bc metrics-server-6f89b5864b-w7k4b storage-provisioner dashboard-metrics-scraper-6b84985989-7n44j kubernetes-dashboard-766959b846-m7r5f
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220325015306-262786 describe pod coredns-5644d7b6d9-sv5bc metrics-server-6f89b5864b-w7k4b storage-provisioner dashboard-metrics-scraper-6b84985989-7n44j kubernetes-dashboard-766959b846-m7r5f
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220325015306-262786 describe pod coredns-5644d7b6d9-sv5bc metrics-server-6f89b5864b-w7k4b storage-provisioner dashboard-metrics-scraper-6b84985989-7n44j kubernetes-dashboard-766959b846-m7r5f: exit status 1 (68.57008ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-sv5bc" not found
	Error from server (NotFound): pods "metrics-server-6f89b5864b-w7k4b" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6b84985989-7n44j" not found
	Error from server (NotFound): pods "kubernetes-dashboard-766959b846-m7r5f" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20220325015306-262786 describe pod coredns-5644d7b6d9-sv5bc metrics-server-6f89b5864b-w7k4b storage-provisioner dashboard-metrics-scraper-6b84985989-7n44j kubernetes-dashboard-766959b846-m7r5f: exit status 1
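The NotFound errors are a race in the post-mortem itself rather than new signal: the pod names came from the field-selector query at helpers_test.go:262, and by the time describe ran those pods had evidently been deleted or replaced. Capturing the state in a single call would avoid the gap, for example:

    $ kubectl --context old-k8s-version-20220325015306-262786 get po -A --field-selector=status.phase!=Running -o yaml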
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.37s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/SecondStart (543.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220325020956-262786 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-different-port-20220325020956-262786 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3: exit status 80 (9m1.03568044s)

                                                
                                                
-- stdout --
	* [default-k8s-different-port-20220325020956-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node default-k8s-different-port-20220325020956-262786 in cluster default-k8s-different-port-20220325020956-262786
	* Pulling base image ...
	* Restarting existing docker container for "default-k8s-different-port-20220325020956-262786" ...
	* Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image k8s.gcr.io/echoserver:1.4
	  - Using image kubernetesui/dashboard:v2.3.1
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
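As in the previous failure, stdout looks healthy (the container restarts and all four addons report enabled), yet the command exits 80 after 9m1s, so with --wait=true some waited-on component presumably never became ready. The component-level detail follows in stderr; the quickest external triage is minikube's own status, as the harness itself runs elsewhere in this report:

    $ out/minikube-linux-amd64 status -p default-k8s-different-port-20220325020956-262786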
** stderr ** 
	I0325 02:23:09.537576  530227 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:23:09.537696  530227 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:23:09.537706  530227 out.go:310] Setting ErrFile to fd 2...
	I0325 02:23:09.537710  530227 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:23:09.537815  530227 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:23:09.538048  530227 out.go:304] Setting JSON to false
	I0325 02:23:09.539384  530227 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":18062,"bootTime":1648156928,"procs":378,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:23:09.539464  530227 start.go:125] virtualization: kvm guest
	I0325 02:23:09.542093  530227 out.go:176] * [default-k8s-different-port-20220325020956-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:23:09.543709  530227 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:23:09.542258  530227 notify.go:193] Checking for updates...
	I0325 02:23:09.545591  530227 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:23:09.547307  530227 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:23:09.548939  530227 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:23:09.550462  530227 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:23:09.550916  530227 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:23:09.551395  530227 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:23:09.596032  530227 docker.go:136] docker version: linux-20.10.14
	I0325 02:23:09.596139  530227 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:23:09.694688  530227 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 02:23:09.627733687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:23:09.694822  530227 docker.go:253] overlay module found
	I0325 02:23:09.697284  530227 out.go:176] * Using the docker driver based on existing profile
	I0325 02:23:09.697314  530227 start.go:284] selected driver: docker
	I0325 02:23:09.697321  530227 start.go:801] validating driver "docker" against &{Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:23:09.697441  530227 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:23:09.697477  530227 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:23:09.697500  530227 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0325 02:23:09.699359  530227 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
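This warning pair is emitted once per docker-info probe, which is why the "More information" hint appears twice in the stdout above. It means minikube could not confirm memory-limit support, so --memory=2200 may not actually be enforced on the container. Oddly, the docker info captured just above reports MemoryLimit:true, suggesting the warning comes from minikube's cgroup-mount heuristic rather than from the daemon flags. A direct check of the daemon's view (a sketch):

    $ docker info --format "{{.MemoryLimit}} {{.SwapLimit}}"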
	I0325 02:23:09.700002  530227 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:23:09.794728  530227 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 02:23:09.730700135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0325 02:23:09.794990  530227 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:23:09.795026  530227 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0325 02:23:09.797186  530227 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:23:09.797321  530227 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 02:23:09.797348  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:09.797358  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:09.797376  530227 start_flags.go:304] config:
	{Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
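The cni.go line above ("docker" driver + containerd runtime found, recommending kindnet) is the same implicit CNI selection that produced the kindnet crash loop in the old-k8s-version failure. For reference, the equivalent explicit invocation (hypothetical, not what the harness runs) would be:

    $ out/minikube-linux-amd64 start -p default-k8s-different-port-20220325020956-262786 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --container-runtime=containerd --kubernetes-version=v1.23.3 --cni=kindnet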
	I0325 02:23:09.799343  530227 out.go:176] * Starting control plane node default-k8s-different-port-20220325020956-262786 in cluster default-k8s-different-port-20220325020956-262786
	I0325 02:23:09.799390  530227 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:23:09.800868  530227 out.go:176] * Pulling base image ...
	I0325 02:23:09.800894  530227 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:23:09.800929  530227 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
	I0325 02:23:09.800950  530227 cache.go:57] Caching tarball of preloaded images
	I0325 02:23:09.800988  530227 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:23:09.801249  530227 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 02:23:09.801271  530227 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on containerd
	I0325 02:23:09.801464  530227 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json ...
	I0325 02:23:09.836753  530227 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:23:09.836785  530227 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:23:09.836808  530227 cache.go:208] Successfully downloaded all kic artifacts
	I0325 02:23:09.836875  530227 start.go:348] acquiring machines lock for default-k8s-different-port-20220325020956-262786: {Name:mk1740da455fcceda9a6f7400776a3a68790d09c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:23:09.836992  530227 start.go:352] acquired machines lock for "default-k8s-different-port-20220325020956-262786" in 82.748µs
	I0325 02:23:09.837017  530227 start.go:94] Skipping create...Using existing machine configuration
	I0325 02:23:09.837034  530227 fix.go:55] fixHost starting: 
	I0325 02:23:09.837307  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:23:09.870534  530227 fix.go:108] recreateIfNeeded on default-k8s-different-port-20220325020956-262786: state=Stopped err=<nil>
	W0325 02:23:09.870565  530227 fix.go:134] unexpected machine state, will restart: <nil>
	I0325 02:23:09.872836  530227 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20220325020956-262786" ...
	I0325 02:23:09.872897  530227 cli_runner.go:133] Run: docker start default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.277624  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:23:10.313461  530227 kic.go:420] container "default-k8s-different-port-20220325020956-262786" state is running.
	I0325 02:23:10.314041  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.349467  530227 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json ...
	I0325 02:23:10.349684  530227 machine.go:88] provisioning docker machine ...
	I0325 02:23:10.349734  530227 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220325020956-262786"
	I0325 02:23:10.349784  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.385648  530227 main.go:130] libmachine: Using SSH client type: native
	I0325 02:23:10.385835  530227 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49594 <nil> <nil>}
	I0325 02:23:10.385854  530227 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220325020956-262786 && echo "default-k8s-different-port-20220325020956-262786" | sudo tee /etc/hostname
	I0325 02:23:10.386524  530227 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33004->127.0.0.1:49594: read: connection reset by peer
	I0325 02:23:13.516245  530227 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220325020956-262786
	
	I0325 02:23:13.516321  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.552077  530227 main.go:130] libmachine: Using SSH client type: native
	I0325 02:23:13.552283  530227 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49594 <nil> <nil>}
	I0325 02:23:13.552307  530227 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220325020956-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220325020956-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220325020956-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 02:23:13.671145  530227 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 02:23:13.671181  530227 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:23:13.671209  530227 ubuntu.go:177] setting up certificates
	I0325 02:23:13.671220  530227 provision.go:83] configureAuth start
	I0325 02:23:13.671284  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.707509  530227 provision.go:138] copyHostCerts
	I0325 02:23:13.707567  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:23:13.707583  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:23:13.707654  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:23:13.707752  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:23:13.707763  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:23:13.707785  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:23:13.707835  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:23:13.707843  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:23:13.707863  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:23:13.707902  530227 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220325020956-262786 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220325020956-262786]
	I0325 02:23:13.801684  530227 provision.go:172] copyRemoteCerts
	I0325 02:23:13.801761  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:23:13.801796  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.837900  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:13.926796  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0325 02:23:13.945040  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:23:13.962557  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0325 02:23:13.980609  530227 provision.go:86] duration metric: configureAuth took 309.376559ms
	I0325 02:23:13.980640  530227 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:23:13.980824  530227 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:23:13.980838  530227 machine.go:91] provisioned docker machine in 3.631132536s
	I0325 02:23:13.980846  530227 start.go:302] post-start starting for "default-k8s-different-port-20220325020956-262786" (driver="docker")
	I0325 02:23:13.980853  530227 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:23:13.980892  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:23:13.980932  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.016302  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.102734  530227 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:23:14.105732  530227 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:23:14.105760  530227 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:23:14.105786  530227 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:23:14.105795  530227 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:23:14.105810  530227 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:23:14.105871  530227 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:23:14.105966  530227 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:23:14.106069  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:23:14.113216  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:23:14.131102  530227 start.go:305] post-start completed in 150.235781ms
	I0325 02:23:14.131193  530227 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:23:14.131252  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.166319  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.255555  530227 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:23:14.259268  530227 fix.go:57] fixHost completed within 4.422236664s
	I0325 02:23:14.259296  530227 start.go:81] releasing machines lock for "default-k8s-different-port-20220325020956-262786", held for 4.422290413s
	I0325 02:23:14.259383  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.295568  530227 ssh_runner.go:195] Run: systemctl --version
	I0325 02:23:14.295622  530227 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:23:14.295624  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.295670  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.331630  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.332124  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.440710  530227 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:23:14.453593  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:23:14.463531  530227 docker.go:183] disabling docker service ...
	I0325 02:23:14.463587  530227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:23:14.473649  530227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:23:14.482885  530227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:23:14.552504  530227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:23:14.625188  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
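
Because this profile runs with --container-runtime=containerd, the lines above hand the node over to containerd: crio is stopped, then docker's socket and service are stopped, disabled, and masked, and the final is-active probe confirms docker stays down. Condensed, the same hand-off looks like this (commands taken from the log, with the standard is-active form):

    sudo systemctl stop -f crio
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is down"   # non-zero exit = inactive
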
	I0325 02:23:14.634619  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:23:14.648987  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1
fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10
KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9
kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
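
The long base64 argument above is the containerd config.toml that minikube renders for the node. Fragments of the blob decode to version = 2, snapshotter = "overlayfs", SystemdCgroup = false, conf_dir = "/etc/cni/net.mk" (matching the kubelet's --cni-conf-dir set later) and sandbox_image = "k8s.gcr.io/pause:3.6". A quick spot-check, decoding only the opening bytes (prefix copied verbatim from the blob):

    printf 'dmVyc2lvbiA9IDIK' | base64 -d   # prints: version = 2
    # On the node itself, the rendered file can simply be read back:
    sudo cat /etc/containerd/config.toml
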
	I0325 02:23:14.662584  530227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:23:14.669661  530227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:23:14.676535  530227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:23:14.749687  530227 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:23:14.824010  530227 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:23:14.824124  530227 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:23:14.828479  530227 start.go:462] Will wait 60s for crictl version
	I0325 02:23:14.828546  530227 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:23:14.854134  530227 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:23:14Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0325 02:23:25.901131  530227 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:23:25.924531  530227 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 02:23:25.924599  530227 ssh_runner.go:195] Run: containerd --version
	I0325 02:23:25.944738  530227 ssh_runner.go:195] Run: containerd --version
	I0325 02:23:25.965406  530227 out.go:176] * Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	I0325 02:23:25.965490  530227 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20220325020956-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:23:25.998365  530227 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0325 02:23:26.001776  530227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
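
The /etc/hosts rewrite above follows a deliberate pattern: filter out any existing tab-anchored entry, append a fresh one, and cp the temp file over /etc/hosts instead of renaming it. Inside a Docker container /etc/hosts is a bind mount, so the file has to be overwritten in place; a mv would swap the inode out from under the mount. Generalised:

    # Idempotently (re)write a pinned hosts entry without replacing the bind-mounted inode.
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
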
	I0325 02:23:26.013555  530227 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 02:23:26.013655  530227 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:23:26.013730  530227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:23:26.037965  530227 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:23:26.037994  530227 containerd.go:526] Images already preloaded, skipping extraction
	I0325 02:23:26.038048  530227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:23:26.062141  530227 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:23:26.062166  530227 cache_images.go:84] Images are preloaded, skipping loading
	I0325 02:23:26.062213  530227 ssh_runner.go:195] Run: sudo crictl info
	I0325 02:23:26.086309  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:26.086334  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:26.086348  530227 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 02:23:26.086361  530227 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220325020956-262786 NodeName:default-k8s-different-port-20220325020956-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 02:23:26.086482  530227 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220325020956-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
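
The generated file is four YAML documents in one: InitConfiguration (advertise address and bind port 8444, CRI socket), ClusterConfiguration (admission plugins, control-plane endpoint), KubeletConfiguration (cgroupfs driver, disk eviction disabled) and KubeProxyConfiguration. It is staged as kubeadm.yaml.new and promoted only if it differs from the active copy, which is exactly what the diff a few lines further down checks:

    # The staleness check minikube performs (the same command appears verbatim below):
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
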
	
	I0325 02:23:26.086574  530227 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220325020956-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0325 02:23:26.086621  530227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
	I0325 02:23:26.093791  530227 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 02:23:26.093861  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 02:23:26.101104  530227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (593 bytes)
	I0325 02:23:26.114154  530227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0325 02:23:26.127481  530227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
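
The unit text above is installed as a systemd drop-in; the empty ExecStart= line clears any ExecStart inherited from the base unit before the minikube-specific command line is set, which is the standard way to override ExecStart for a non-oneshot service. Both files just scp'd can be reviewed together on the node:

    # Show the effective kubelet unit, including the drop-in written above.
    sudo systemctl cat kubelet
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
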
	I0325 02:23:26.139891  530227 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0325 02:23:26.142699  530227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:23:26.151979  530227 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786 for IP: 192.168.49.2
	I0325 02:23:26.152115  530227 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 02:23:26.152173  530227 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 02:23:26.152283  530227 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.key
	I0325 02:23:26.152367  530227 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key.dd3b5fb2
	I0325 02:23:26.152432  530227 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key
	I0325 02:23:26.152572  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 02:23:26.152618  530227 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 02:23:26.152633  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 02:23:26.152719  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 02:23:26.152762  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 02:23:26.152796  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 02:23:26.152856  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:23:26.153663  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 02:23:26.170543  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0325 02:23:26.188516  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 02:23:26.206252  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0325 02:23:26.223851  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 02:23:26.240997  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 02:23:26.258925  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 02:23:26.276782  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 02:23:26.293956  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 02:23:26.311184  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 02:23:26.328788  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 02:23:26.345739  530227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 02:23:26.358217  530227 ssh_runner.go:195] Run: openssl version
	I0325 02:23:26.363310  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 02:23:26.371143  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.374386  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.374446  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.379667  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
	I0325 02:23:26.386880  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 02:23:26.394406  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.397558  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.397619  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.402576  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0325 02:23:26.409580  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 02:23:26.416799  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.419794  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.419843  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.424480  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
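
The <hash>.0 link names above are OpenSSL subject-hash lookups: openssl x509 -hash -noout prints the eight-hex-digit subject hash that OpenSSL's CA directory lookup expects as a symlink name under /etc/ssl/certs, which is why minikubeCA.pem is linked as b5213941.0. By hand:

    # Compute the subject hash and create the lookup symlink OpenSSL expects.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"
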
	I0325 02:23:26.431093  530227 kubeadm.go:391] StartCluster: {Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:23:26.431219  530227 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 02:23:26.431267  530227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:23:26.455469  530227 cri.go:87] found id: "f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db"
	I0325 02:23:26.455495  530227 cri.go:87] found id: "246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	I0325 02:23:26.455501  530227 cri.go:87] found id: "dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b"
	I0325 02:23:26.455506  530227 cri.go:87] found id: "21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73"
	I0325 02:23:26.455510  530227 cri.go:87] found id: "bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7"
	I0325 02:23:26.455515  530227 cri.go:87] found id: "6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182"
	I0325 02:23:26.455520  530227 cri.go:87] found id: "c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd"
	I0325 02:23:26.455524  530227 cri.go:87] found id: ""
	I0325 02:23:26.455562  530227 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0325 02:23:26.469264  530227 cri.go:114] JSON = null
	W0325 02:23:26.469319  530227 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 7
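
The warning above is a consistency check, not a failure: crictl reported seven kube-system containers, but runc found nothing paused under the k8s.io root (JSON = null), so there is nothing to unpause and the start continues. The two views minikube compared:

    # Both commands are taken from the log lines above.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l
    sudo runc --root /run/containerd/runc/k8s.io list -f json
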
	I0325 02:23:26.469383  530227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 02:23:26.476380  530227 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0325 02:23:26.476423  530227 kubeadm.go:601] restartCluster start
	I0325 02:23:26.476467  530227 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0325 02:23:26.483313  530227 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:26.484051  530227 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220325020956-262786" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:23:26.484409  530227 kubeconfig.go:127] "default-k8s-different-port-20220325020956-262786" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0325 02:23:26.485050  530227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
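
The profile's context was missing from the integration run's kubeconfig, so minikube rewrites it under a file lock (500ms retry delay, 1m timeout, per the lock settings above). Assuming the same KUBECONFIG is exported, the repair can be confirmed with:

    kubectl config get-contexts default-k8s-different-port-20220325020956-262786
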
	I0325 02:23:26.486481  530227 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0325 02:23:26.493604  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.493676  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.502078  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:26.702482  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.702567  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.712014  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:26.902246  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.902320  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.910978  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.103208  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.103289  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.111964  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.303121  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.303213  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.312214  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.502493  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.502598  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.511468  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.702747  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.702890  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.711697  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.902931  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.903050  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.912319  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.102538  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.102634  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.111710  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.303008  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.303080  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.312078  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.502221  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.502313  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.511095  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.702230  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.702303  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.711103  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.902322  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.902413  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.911515  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.102704  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.102774  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.111434  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.302770  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.302858  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.311706  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.503069  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.503150  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.512690  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.512721  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.512770  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.521635  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.521669  530227 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
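
The loop above polls about every 200ms: pgrep -x -n -f selects the newest process whose full command line matches exactly, and exits 1 when no kube-apiserver is running, which is what every "Process exited with status 1" records. Once the budget is exhausted, minikube concludes the control plane needs reconfiguring rather than a plain restart. The probe by hand:

    # pgrep's exit code (1 = no match) is what drives the retry loop above.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'; echo "exit=$?"
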
	I0325 02:23:29.521677  530227 kubeadm.go:1067] stopping kube-system containers ...
	I0325 02:23:29.521695  530227 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0325 02:23:29.521749  530227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:23:29.546890  530227 cri.go:87] found id: "f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db"
	I0325 02:23:29.546921  530227 cri.go:87] found id: "246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	I0325 02:23:29.546927  530227 cri.go:87] found id: "dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b"
	I0325 02:23:29.546932  530227 cri.go:87] found id: "21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73"
	I0325 02:23:29.546937  530227 cri.go:87] found id: "bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7"
	I0325 02:23:29.546942  530227 cri.go:87] found id: "6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182"
	I0325 02:23:29.546946  530227 cri.go:87] found id: "c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd"
	I0325 02:23:29.546979  530227 cri.go:87] found id: ""
	I0325 02:23:29.546987  530227 cri.go:232] Stopping containers: [f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db 246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739 dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b 21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73 bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7 6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182 c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd]
	I0325 02:23:29.547049  530227 ssh_runner.go:195] Run: which crictl
	I0325 02:23:29.550389  530227 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db 246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739 dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b 21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73 bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7 6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182 c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd
	I0325 02:23:29.575922  530227 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0325 02:23:29.586795  530227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:23:29.594440  530227 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar 25 02:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Mar 25 02:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2131 Mar 25 02:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Mar 25 02:10 /etc/kubernetes/scheduler.conf
	
	I0325 02:23:29.594520  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0325 02:23:29.601472  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0325 02:23:29.608305  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0325 02:23:29.615261  530227 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.615319  530227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0325 02:23:29.622383  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0325 02:23:29.629095  530227 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.629161  530227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
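
Each component kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint; a grep exit of 1 means the file does not reference https://control-plane.minikube.internal:8444, so it is deleted and left for kubeadm init phase kubeconfig to regenerate below. Per file, the check amounts to:

    sudo grep -q 'https://control-plane.minikube.internal:8444' /etc/kubernetes/scheduler.conf \
      || sudo rm -f /etc/kubernetes/scheduler.conf
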
	I0325 02:23:29.636095  530227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:23:29.642934  530227 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0325 02:23:29.642998  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:29.687932  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.297307  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.428688  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.476555  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
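
Instead of a full kubeadm init, restartCluster replays individual init phases against the staged config, reusing existing certificates and etcd data where they are still valid. The same sequence, runnable by hand on the node:

    # $phase is intentionally unquoted so "certs all" splits into two arguments.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
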
	I0325 02:23:30.528341  530227 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:23:30.528397  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:31.037340  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:31.536903  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:32.037557  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:32.537100  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:33.037156  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:33.537124  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:34.037604  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:34.536762  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:35.037573  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:35.536890  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.037157  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.536733  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.598317  530227 api_server.go:71] duration metric: took 6.069979844s to wait for apiserver process to appear ...
	I0325 02:23:36.598362  530227 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:23:36.598380  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:36.598866  530227 api_server.go:256] stopped: https://192.168.49.2:8444/healthz: Get "https://192.168.49.2:8444/healthz": dial tcp 192.168.49.2:8444: connect: connection refused
	I0325 02:23:37.099575  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:40.211650  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0325 02:23:40.211687  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0325 02:23:40.599053  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:40.603812  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:23:40.603846  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:23:41.099269  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:41.104481  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:23:41.104517  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:23:41.599902  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:41.604945  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
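
The 500s above were already informative: every check is [+] except the two bootstrap post-start hooks (RBAC roles and system priority classes), so the server was up and merely finishing initialisation; about a second later the same endpoint returns 200. The per-check breakdown shown on failure can be requested explicitly even when healthy:

    curl -ks 'https://192.168.49.2:8444/healthz?verbose'
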
	I0325 02:23:41.612918  530227 api_server.go:140] control plane version: v1.23.3
	I0325 02:23:41.612944  530227 api_server.go:130] duration metric: took 5.014575703s to wait for apiserver health ...
	I0325 02:23:41.612957  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:41.612965  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:41.615242  530227 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:23:41.615325  530227 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:23:41.619644  530227 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 02:23:41.619669  530227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:23:41.633910  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
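
With the docker driver and containerd runtime, minikube picks kindnet as the CNI and applies its manifest through the bundled kubectl and the node-local kubeconfig, as shown above. From outside the node, the rollout can be watched with a sketch like this (the app=kindnet label is an assumption about the manifest; the pod kindnet-kt955 appears below):

    kubectl -n kube-system get pods -l app=kindnet -w   # label assumed from the kindnet manifest
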
	I0325 02:23:42.356822  530227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:23:42.365307  530227 system_pods.go:59] 9 kube-system pods found
	I0325 02:23:42.365343  530227 system_pods.go:61] "coredns-64897985d-9tgbz" [0d638e01-927d-4431-bf10-393b424f801a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365353  530227 system_pods.go:61] "etcd-default-k8s-different-port-20220325020956-262786" [10e10258-89d5-423b-850f-60ef4b12b83a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0325 02:23:42.365361  530227 system_pods.go:61] "kindnet-kt955" [87a42b24-60b7-415b-abc9-e574262093c0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:23:42.365368  530227 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220325020956-262786" [877f6ccd-dcc7-47ff-8574-9b9ec1b05a25] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0325 02:23:42.365376  530227 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220325020956-262786" [cbd16e08-169e-458a-b9c2-bcaa627475cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0325 02:23:42.365382  530227 system_pods.go:61] "kube-proxy-7cpjt" [6d1657ba-6fcd-4ee8-8293-b6aa0b7e1fb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0325 02:23:42.365387  530227 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220325020956-262786" [7b21b770-272f-4183-a1e4-6cca761e7be8] Running
	I0325 02:23:42.365395  530227 system_pods.go:61] "metrics-server-b955d9d8-h94qn" [f250996f-f9e2-41f2-ba86-6da05d627811] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365401  530227 system_pods.go:61] "storage-provisioner" [1f4e27b1-94bb-49ed-b16e-7237ce00c11a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365409  530227 system_pods.go:74] duration metric: took 8.560724ms to wait for pod list to return data ...
	I0325 02:23:42.365419  530227 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:23:42.368395  530227 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:23:42.368426  530227 node_conditions.go:123] node cpu capacity is 8
	I0325 02:23:42.368439  530227 node_conditions.go:105] duration metric: took 3.013418ms to run NodePressure ...
	I0325 02:23:42.368460  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:42.498603  530227 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0325 02:23:42.503044  530227 kubeadm.go:752] kubelet initialised
	I0325 02:23:42.503087  530227 kubeadm.go:753] duration metric: took 4.396508ms waiting for restarted kubelet to initialise ...
	I0325 02:23:42.503097  530227 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:23:42.508446  530227 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-9tgbz" in "kube-system" namespace to be "Ready" ...
	I0325 02:23:44.514894  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... identical pod_ready.go:102 messages for pod "coredns-64897985d-9tgbz" (Pending; Unschedulable: 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate) repeat every ~2.5s from 02:23:46 through 02:27:38 ...]
	I0325 02:27:40.514302  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:42.510816  530227 pod_ready.go:81] duration metric: took 4m0.002335219s waiting for pod "coredns-64897985d-9tgbz" in "kube-system" namespace to be "Ready" ...
	E0325 02:27:42.510845  530227 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9tgbz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0325 02:27:42.510866  530227 pod_ready.go:38] duration metric: took 4m0.007755725s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:27:42.510971  530227 kubeadm.go:605] restartCluster took 4m16.034541089s
	W0325 02:27:42.511146  530227 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0325 02:27:42.511207  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0325 02:27:44.339219  530227 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.827981438s)
	I0325 02:27:44.339290  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:27:44.348982  530227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:27:44.356461  530227 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 02:27:44.356520  530227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:27:44.363951  530227 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0325 02:27:44.364022  530227 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0325 02:27:57.283699  530227 out.go:203]   - Generating certificates and keys ...
	I0325 02:27:57.286878  530227 out.go:203]   - Booting up control plane ...
	I0325 02:27:57.289872  530227 out.go:203]   - Configuring RBAC rules ...
	I0325 02:27:57.291696  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:27:57.291719  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:27:57.293919  530227 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:27:57.294011  530227 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:27:57.297810  530227 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 02:27:57.297833  530227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:27:57.312402  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:27:58.034457  530227 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 02:27:58.034521  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=default-k8s-different-port-20220325020956-262786 minikube.k8s.io/updated_at=2022_03_25T02_27_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:58.034522  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:58.101247  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:58.118657  530227 ops.go:34] apiserver oom_adj: -16
	I0325 02:27:58.688158  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:59.188855  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:59.688734  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:00.188158  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:00.688215  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:01.188912  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:01.688969  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:02.188876  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:02.688656  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:03.188835  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:03.688222  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:04.188154  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:04.688514  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:05.188103  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:05.688209  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:06.187993  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:06.688197  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:07.188677  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:07.688113  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:08.187906  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:08.688331  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:09.188315  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:09.688031  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:09.751277  530227 kubeadm.go:1020] duration metric: took 11.716819332s to wait for elevateKubeSystemPrivileges.
	I0325 02:28:09.751307  530227 kubeadm.go:393] StartCluster complete in 4m43.320221544s
	I0325 02:28:09.751334  530227 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:28:09.751483  530227 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:28:09.752678  530227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:28:10.268555  530227 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220325020956-262786" rescaled to 1
	I0325 02:28:10.268633  530227 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:28:10.268674  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 02:28:10.270943  530227 out.go:176] * Verifying Kubernetes components...
	I0325 02:28:10.268968  530227 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0325 02:28:10.269163  530227 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:28:10.271075  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:28:10.271163  530227 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271166  530227 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271183  530227 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271188  530227 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271189  530227 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.271192  530227 addons.go:165] addon metrics-server should already be in state true
	I0325 02:28:10.271207  530227 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.271217  530227 addons.go:165] addon dashboard should already be in state true
	I0325 02:28:10.271232  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.271251  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.271164  530227 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271302  530227 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.271316  530227 addons.go:165] addon storage-provisioner should already be in state true
	I0325 02:28:10.271343  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.271538  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.271708  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.271708  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.271833  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.344040  530227 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0325 02:28:10.344132  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0325 02:28:10.344144  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0325 02:28:10.344219  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.346679  530227 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:28:10.346811  530227 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:28:10.346826  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 02:28:10.346882  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.353938  530227 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0325 02:28:10.355562  530227 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I0325 02:28:10.355640  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0325 02:28:10.355656  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0325 02:28:10.355719  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.383518  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.385223  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.391464  530227 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.391493  530227 addons.go:165] addon default-storageclass should already be in state true
	I0325 02:28:10.391524  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.392049  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.400074  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.437891  530227 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 02:28:10.437915  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 02:28:10.437962  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.471205  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.494523  530227 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220325020956-262786" to be "Ready" ...
	I0325 02:28:10.494562  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0325 02:28:10.609220  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:28:10.609589  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0325 02:28:10.609655  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0325 02:28:10.609633  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0325 02:28:10.609758  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0325 02:28:10.700787  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0325 02:28:10.700823  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0325 02:28:10.701805  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0325 02:28:10.701834  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0325 02:28:10.800343  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0325 02:28:10.800381  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0325 02:28:10.808905  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 02:28:10.810521  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:28:10.810550  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0325 02:28:10.899094  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0325 02:28:10.899126  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0325 02:28:10.905212  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:28:11.003467  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0325 02:28:11.003501  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0325 02:28:11.102902  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0325 02:28:11.102933  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0325 02:28:11.203761  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0325 02:28:11.203793  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0325 02:28:11.210549  530227 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0325 02:28:11.294868  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0325 02:28:11.294905  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0325 02:28:11.407000  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:28:11.407036  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0325 02:28:11.594579  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:28:11.993918  530227 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.08865456s)
	I0325 02:28:11.994023  530227 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:12.406348  530227 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0325 02:28:12.406384  530227 addons.go:417] enableAddons completed in 2.137426118s
	I0325 02:28:12.501452  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:15.001678  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:17.002236  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:19.501942  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:22.002181  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:24.002264  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:26.501667  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:29.002078  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:31.501585  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:34.001805  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:36.002041  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:38.002339  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:40.501697  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:43.001641  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:45.501691  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:48.001669  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:50.001733  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:52.001838  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:54.002159  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:56.501779  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:59.001558  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:01.002011  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:03.501394  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:06.001566  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:08.002174  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:10.501618  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:12.502189  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:15.005102  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:17.501055  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:19.501433  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:22.001489  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:24.001553  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:26.001738  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:28.501727  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:31.001897  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:33.002070  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:35.501917  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:38.001428  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:40.001672  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:42.002147  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:44.501313  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:47.001730  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:49.002193  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:51.501480  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:53.502170  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:56.001749  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:58.501848  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:01.001969  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:03.002165  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:05.501492  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:07.501983  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:10.001162  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:12.001919  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:14.001948  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:16.501197  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:18.502117  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:20.502225  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:23.002141  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:25.502083  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:28.001876  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:30.002027  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:32.503056  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:35.001423  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:37.501534  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:40.002190  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:42.502274  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:45.001432  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:47.001999  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:49.501359  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:52.001784  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:54.501845  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:56.502160  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:59.001872  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:01.501682  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:04.002218  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:06.002445  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:08.501995  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:10.502365  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:13.001863  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:15.002027  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:17.003337  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:19.501499  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:21.501688  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:24.001679  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:26.502090  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:29.001645  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:31.501716  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:33.501846  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:36.001006  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:38.002402  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:40.501392  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:42.502189  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:45.002082  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:47.500971  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:49.502314  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:52.001990  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:54.501597  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:57.001789  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:59.002549  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:01.003204  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:03.501518  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:05.502059  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:07.502226  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:10.001697  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:10.504266  530227 node_ready.go:38] duration metric: took 4m0.009695209s waiting for node "default-k8s-different-port-20220325020956-262786" to be "Ready" ...
	I0325 02:32:10.507459  530227 out.go:176] 
	W0325 02:32:10.507629  530227 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:32:10.507649  530227 out.go:241] * 
	W0325 02:32:10.508449  530227 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0325 02:32:10.510621  530227 out.go:176] 

                                                
                                                
** /stderr **
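The stderr above records one failure twice over: coredns sat Pending for the full 4m wait because the node kept the node.kubernetes.io/not-ready taint, and the node itself never reported Ready inside the 6m budget from start.go. To triage a run like this by hand, the node conditions and taints can be queried directly; the commands below are a rough sketch, assuming the profile from this log is still running and that its kubeconfig context carries the profile name (both are assumptions, not part of the test harness):

	# Inspect why the node is stuck NotReady (typically the CNI never came up).
	kubectl --context default-k8s-different-port-20220325020956-262786 \
	  describe node default-k8s-different-port-20220325020956-262786

	# Confirm the taint the scheduler complained about is still present.
	kubectl --context default-k8s-different-port-20220325020956-262786 \
	  get node default-k8s-different-port-20220325020956-262786 \
	  -o jsonpath='{.spec.taints}'

	# The log recommended kindnet for docker driver + containerd; check whether
	# its pods ever started (the app=kindnet label is an assumption from the
	# usual kindnet manifest).
	kubectl --context default-k8s-different-port-20220325020956-262786 \
	  -n kube-system get pods -l app=kindnet
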
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-different-port-20220325020956-262786 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20220325020956-262786
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20220325020956-262786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4",
	        "Created": "2022-03-25T02:10:07.830065737Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 530511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-25T02:23:10.269638118Z",
	            "FinishedAt": "2022-03-25T02:23:08.930200628Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/hosts",
	        "LogPath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4-json.log",
	        "Name": "/default-k8s-different-port-20220325020956-262786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220325020956-262786:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220325020956-262786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873-init/diff:/var/lib/docker/overlay2/be54eb94fcdf63a43430c63a2deca34069b6322a2c5c80bf32c41c253b4eef44/diff:/var/lib/docker/overlay2/21ae1babc9289fff260c3571871aeb833b37e21656a9cc3eb8df07eb3fe4f79a/diff:/var/lib/docker/overlay2/3ee3331c2ddb88569b411d0ab54a2ef7d7d01ca16c80ced0833580bfbe9bdada/diff:/var/lib/docker/overlay2/d8bc8d60c9bd47ff1095b644ef6d44396d637a148bcebd5ea3b7706fee6b13fe/diff:/var/lib/docker/overlay2/ca1519d93c9c70a99f709b179bab33e31837f4b561c407c362770656a0ad970a/diff:/var/lib/docker/overlay2/8b7ed626d5c01c442f80e5e9bbe87bdaa4e3b209e4d0720010e78ab32631a44a/diff:/var/lib/docker/overlay2/fb54723378f675b6bc72cd8608807007fdf0fc435e1383398764588c2881dcc7/diff:/var/lib/docker/overlay2/20deb1df880f3adcdce0caa0e0b6ce0170bb01f7b7c564aa7c00c10e886a8422/diff:/var/lib/docker/overlay2/3e9c58516a6ca7eb07cbd77ece10826bcffc2c564c20a046413c894f1e457c14/diff:/var/lib/docker/overlay2/9fb4a5
72727350e63058db77497edb3aa8f3fd157bf3faa4b882f3d6218a2d2c/diff:/var/lib/docker/overlay2/2dad70b776042365cd2686f6925d1728b98e82e82f5ec21fcafaa6ce796653ed/diff:/var/lib/docker/overlay2/d94272e0e249e656b05e1483e035d137254d3bab6b9c568065d1f8783a72cf04/diff:/var/lib/docker/overlay2/c92254120acded698585ef577c9ac3d6f73267981cf36a87ee38ccd694f47b94/diff:/var/lib/docker/overlay2/84b4bbb670c367ba779baabe503b5345e2c3e2beb5a4505c3b235e5db4e89ee6/diff:/var/lib/docker/overlay2/4981a02b24aef7d5c066a42837381dcdd4a299b491d8e55523fca674cd0db0d1/diff:/var/lib/docker/overlay2/c3c34e9c466bb3a144a51042f0930825943916afe285a7f97644c400518f341f/diff:/var/lib/docker/overlay2/44f4921d100d6ba90db390588513726503aec84844325bd99eeb137c6018277f/diff:/var/lib/docker/overlay2/a39a458488b7f863079e4c6b58196e8a4f9082987519a734c45a007cd0d94828/diff:/var/lib/docker/overlay2/f0312047c7b0b02fd66fd826e23406e40cee0ca3ceecfd3ead5dcecbc5026230/diff:/var/lib/docker/overlay2/96c9397a20500e41888794ddb5877995a1734042648a24d59ca0d2ec5021e9de/diff:/var/lib/d
ocker/overlay2/8dea2fad08fc7127380e7b5ee48074c49d9bb8abb4e0e626d1753b47e734e16a/diff:/var/lib/docker/overlay2/b45ce3d74626e250be956220b3bdd19784c7b5f160566cf2abc4e3bebec2e787/diff:/var/lib/docker/overlay2/c53d5b53646725c2e75d104fbdf63f67b1a6d4ec7be410f678c39db7ca88704f/diff:/var/lib/docker/overlay2/c989625713fed7c79c6acf122f86cb4a5d36c5c25f16b6ff042aba0f5c76ef40/diff:/var/lib/docker/overlay2/062c90de70f705242f19d7fd008480be165d852e536336d97bcfe7aaba03bc2c/diff:/var/lib/docker/overlay2/9dd5e3e1997449a8dd0820e30ab1aa5b34db265e9783f9431ebcca7ceaf17510/diff:/var/lib/docker/overlay2/4cb50a0a67380109d348cd3005e5b855fceaf243cf5b0130df8952ed58e6c56c/diff:/var/lib/docker/overlay2/1fce572a3789e30bd91fd684a3bc2cae58743b3718b1d078378158f22156795e/diff:/var/lib/docker/overlay2/2bb28738c8f2de75a3da83169e8b29e28c57bf73908fe80dbca06551ac39d459/diff:/var/lib/docker/overlay2/70c1f9c120af3acda7bbd97c063aeed205a47f16b2818b7a2c4e5cfa2e3321bf/diff:/var/lib/docker/overlay2/84cfc718f71abd3da77845f467dceaeceb62953d1f92e9cb2d966b19d2e
9a733/diff:/var/lib/docker/overlay2/8d6f862f75e903072cefca0f974c925dc5946ac5bf7bcb923adecf23cdb3d454/diff:/var/lib/docker/overlay2/778af97f4ec3a1e9ceed247958939b375c3209058ee649ac0231b3ccf59c0e5d/diff:/var/lib/docker/overlay2/c0e0a5b57f41ef9ddf67d67f928bcbbd060abb8aa3ec732c9ee48b3d5ce723a2/diff:/var/lib/docker/overlay2/f4bc2ed173f4985e492d89df0a08aa6017952a9ac37054537d57bb7589c1560e/diff:/var/lib/docker/overlay2/562d496753ef0c1e8279787dfdb7cb4d6e8cfbd0eaf79a1f9dc3fd10916330b5/diff:/var/lib/docker/overlay2/717fb77b4f16514e3bd496845adfe213bd63609053b43f6d800e6757197f0f04/diff:/var/lib/docker/overlay2/4e8d84337665652419a5a40f908d308087c202f55b785114c1e38be84a17eca7/diff:/var/lib/docker/overlay2/5b34f3b4b29c9f9ab991b524096584bbf01d14e9d8d4b7786bda6d28241999e8/diff:/var/lib/docker/overlay2/49e6c28c6a50420d2f8e2a3d3278d425495086d9478a7ece39dd989925949a5d/diff:/var/lib/docker/overlay2/86c1534e0117ca4e106fa3c177c4f1b2d85e37b9d2a5dceeb007afff1721713e/diff:/var/lib/docker/overlay2/c5013a5641f131cadca99884c2ae5b785bfae4
a079463490ea0cd215cd884d43/diff:/var/lib/docker/overlay2/f61ccdb261987275521111370c06a14baf102e5047e24281f278eaaee820a410/diff:/var/lib/docker/overlay2/46838e2b0c3f67b4bfda29963d76e2c8babbd54904a4a6f5745e924a73437c2d/diff:/var/lib/docker/overlay2/16180439a4d3ee12ff794b26cbfd692186d7785b4c6f33c8c57416535667c54e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220325020956-262786",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220325020956-262786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220325020956-262786",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220325020956-262786",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220325020956-262786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e63ced8335d7e5f521c6a6ba8d6908625d99a772df361d90fbcab337a78b772",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49594"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49593"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49590"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49592"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49591"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6e63ced8335d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220325020956-262786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0e271f66fa8d",
	                        "default-k8s-different-port-20220325020956-262786"
	                    ],
	                    "NetworkID": "c5c0224540019d877be5e36bfc556dc0a2d83980f6e5b563be26e38eaad27a38",
	                    "EndpointID": "b15913a0fa2c6356af7d5afde8a1a2d1e35583bc3ab4949729b25ba92bed5481",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
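
The inspect output above lists the five container ports minikube publishes, each bound to 127.0.0.1 on an ephemeral host port; 22/tcp (49594 in this run) is the SSH endpoint the provisioner dials below, and 8444/tcp carries the API server for this different-port profile. The harness recovers a mapped port with a Go template (the same one visible in the cli_runner lines later in this log). A minimal standalone sketch of that lookup, assuming only that docker is on PATH and using this run's profile name:

// Sketch: recover the host port Docker mapped to the container's
// SSH port (22/tcp), using the template seen in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "default-k8s-different-port-20220325020956-262786"
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out))) // prints 49594 for this run
}
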
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220325020956-262786 -n default-k8s-different-port-20220325020956-262786
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220325020956-262786 logs -n 25
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                         | embed-certs-20220325020743-262786                | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:54 UTC | Fri, 25 Mar 2022 02:14:54 UTC |
	|         | embed-certs-20220325020743-262786                          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20220325021454-262786 --memory=2200          | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:54 UTC | Fri, 25 Mar 2022 02:15:49 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:49 UTC | Fri, 25 Mar 2022 02:15:50 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:50 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:21 UTC | Fri, 25 Mar 2022 02:16:22 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:23 UTC | Fri, 25 Mar 2022 02:16:24 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:24 UTC | Fri, 25 Mar 2022 02:16:25 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:25 UTC | Fri, 25 Mar 2022 02:16:35 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:35 UTC | Fri, 25 Mar 2022 02:16:35 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20220325021454-262786 --memory=2200          | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:45 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:45 UTC | Fri, 25 Mar 2022 02:16:45 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:45 UTC | Fri, 25 Mar 2022 02:16:46 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:47 UTC | Fri, 25 Mar 2022 02:16:48 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:48 UTC | Fri, 25 Mar 2022 02:16:51 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:51 UTC | Fri, 25 Mar 2022 02:16:52 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786                      | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:19:35 UTC | Fri, 25 Mar 2022 02:19:36 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:55 UTC | Fri, 25 Mar 2022 02:22:56 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:57 UTC | Fri, 25 Mar 2022 02:22:58 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:59 UTC | Fri, 25 Mar 2022 02:22:59 UTC |
	|         | default-k8s-different-port-20220325020956-262786           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:59 UTC | Fri, 25 Mar 2022 02:23:09 UTC |
	|         | default-k8s-different-port-20220325020956-262786           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:23:09 UTC | Fri, 25 Mar 2022 02:23:09 UTC |
	|         | default-k8s-different-port-20220325020956-262786           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:25:38 UTC | Fri, 25 Mar 2022 02:25:39 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786                      | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:28:37 UTC | Fri, 25 Mar 2022 02:28:38 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:28:39 UTC | Fri, 25 Mar 2022 02:28:41 UTC |
	|         | old-k8s-version-20220325015306-262786                      |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 02:23:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0325 02:23:09.537576  530227 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:23:09.537696  530227 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:23:09.537706  530227 out.go:310] Setting ErrFile to fd 2...
	I0325 02:23:09.537710  530227 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:23:09.537815  530227 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:23:09.538048  530227 out.go:304] Setting JSON to false
	I0325 02:23:09.539384  530227 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":18062,"bootTime":1648156928,"procs":378,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:23:09.539464  530227 start.go:125] virtualization: kvm guest
	I0325 02:23:09.542093  530227 out.go:176] * [default-k8s-different-port-20220325020956-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:23:09.543709  530227 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:23:09.542258  530227 notify.go:193] Checking for updates...
	I0325 02:23:09.545591  530227 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:23:09.547307  530227 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:23:09.548939  530227 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:23:09.550462  530227 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:23:09.550916  530227 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:23:09.551395  530227 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:23:09.596032  530227 docker.go:136] docker version: linux-20.10.14
	I0325 02:23:09.596139  530227 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:23:09.694688  530227 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 02:23:09.627733687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:23:09.694822  530227 docker.go:253] overlay module found
	I0325 02:23:09.697284  530227 out.go:176] * Using the docker driver based on existing profile
	I0325 02:23:09.697314  530227 start.go:284] selected driver: docker
	I0325 02:23:09.697321  530227 start.go:801] validating driver "docker" against &{Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:23:09.697441  530227 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:23:09.697477  530227 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:23:09.697500  530227 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:23:09.699359  530227 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:23:09.700002  530227 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:23:09.794728  530227 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 02:23:09.730700135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0325 02:23:09.794990  530227 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:23:09.795026  530227 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:23:09.797186  530227 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
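
Both docker-info probes above end with the same pair of warnings: the kernel's memory cgroup controller is unavailable, so the --memory=2200 limit cannot actually be enforced. As an illustration only (this is not minikube's implementation), an equivalent check can be made by reading the controller table in /proc/cgroups:

// Illustrative sketch: report whether the cgroup v1 memory controller
// is present and enabled on this host, similar in spirit to the check
// that produces the "Your cgroup does not allow setting memory" warning.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/cgroups") // columns: subsys_name hierarchy num_cgroups enabled
	if err != nil {
		fmt.Println("cannot read /proc/cgroups:", err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 4 && fields[0] == "memory" {
			fmt.Println("memory controller enabled:", fields[3] == "1")
			return
		}
	}
	fmt.Println("memory controller not present")
}
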
	I0325 02:23:09.797321  530227 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 02:23:09.797348  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:09.797358  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:09.797376  530227 start_flags.go:304] config:
	{Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:23:09.799343  530227 out.go:176] * Starting control plane node default-k8s-different-port-20220325020956-262786 in cluster default-k8s-different-port-20220325020956-262786
	I0325 02:23:09.799390  530227 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:23:09.800868  530227 out.go:176] * Pulling base image ...
	I0325 02:23:09.800894  530227 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:23:09.800929  530227 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
	I0325 02:23:09.800950  530227 cache.go:57] Caching tarball of preloaded images
	I0325 02:23:09.800988  530227 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:23:09.801249  530227 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 02:23:09.801271  530227 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on containerd
	I0325 02:23:09.801464  530227 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json ...
	I0325 02:23:09.836753  530227 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:23:09.836785  530227 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:23:09.836808  530227 cache.go:208] Successfully downloaded all kic artifacts
	I0325 02:23:09.836875  530227 start.go:348] acquiring machines lock for default-k8s-different-port-20220325020956-262786: {Name:mk1740da455fcceda9a6f7400776a3a68790d09c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:23:09.836992  530227 start.go:352] acquired machines lock for "default-k8s-different-port-20220325020956-262786" in 82.748µs
	I0325 02:23:09.837017  530227 start.go:94] Skipping create...Using existing machine configuration
	I0325 02:23:09.837034  530227 fix.go:55] fixHost starting: 
	I0325 02:23:09.837307  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:23:09.870534  530227 fix.go:108] recreateIfNeeded on default-k8s-different-port-20220325020956-262786: state=Stopped err=<nil>
	W0325 02:23:09.870565  530227 fix.go:134] unexpected machine state, will restart: <nil>
	I0325 02:23:06.279884  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:08.779908  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:09.872836  530227 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20220325020956-262786" ...
	I0325 02:23:09.872897  530227 cli_runner.go:133] Run: docker start default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.277624  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:23:10.313461  530227 kic.go:420] container "default-k8s-different-port-20220325020956-262786" state is running.
	I0325 02:23:10.314041  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.349467  530227 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json ...
	I0325 02:23:10.349684  530227 machine.go:88] provisioning docker machine ...
	I0325 02:23:10.349734  530227 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220325020956-262786"
	I0325 02:23:10.349784  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.385648  530227 main.go:130] libmachine: Using SSH client type: native
	I0325 02:23:10.385835  530227 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49594 <nil> <nil>}
	I0325 02:23:10.385854  530227 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220325020956-262786 && echo "default-k8s-different-port-20220325020956-262786" | sudo tee /etc/hostname
	I0325 02:23:10.386524  530227 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33004->127.0.0.1:49594: read: connection reset by peer
	I0325 02:23:13.516245  530227 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220325020956-262786
	
	I0325 02:23:13.516321  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.552077  530227 main.go:130] libmachine: Using SSH client type: native
	I0325 02:23:13.552283  530227 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49594 <nil> <nil>}
	I0325 02:23:13.552307  530227 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220325020956-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220325020956-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220325020956-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 02:23:13.671145  530227 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 02:23:13.671181  530227 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:23:13.671209  530227 ubuntu.go:177] setting up certificates
	I0325 02:23:13.671220  530227 provision.go:83] configureAuth start
	I0325 02:23:13.671284  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.707509  530227 provision.go:138] copyHostCerts
	I0325 02:23:13.707567  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:23:13.707583  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:23:13.707654  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:23:13.707752  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:23:13.707763  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:23:13.707785  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:23:13.707835  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:23:13.707843  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:23:13.707863  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:23:13.707902  530227 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220325020956-262786 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220325020956-262786]
	I0325 02:23:13.801684  530227 provision.go:172] copyRemoteCerts
	I0325 02:23:13.801761  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:23:13.801796  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.837900  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:13.926796  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0325 02:23:13.945040  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:23:13.962557  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0325 02:23:13.980609  530227 provision.go:86] duration metric: configureAuth took 309.376559ms
	I0325 02:23:13.980640  530227 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:23:13.980824  530227 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:23:13.980838  530227 machine.go:91] provisioned docker machine in 3.631132536s
	I0325 02:23:13.980846  530227 start.go:302] post-start starting for "default-k8s-different-port-20220325020956-262786" (driver="docker")
	I0325 02:23:13.980853  530227 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:23:13.980892  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:23:13.980932  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.016302  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.102734  530227 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:23:14.105732  530227 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:23:14.105760  530227 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:23:14.105786  530227 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:23:14.105795  530227 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:23:14.105810  530227 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:23:14.105871  530227 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:23:14.105966  530227 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:23:14.106069  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:23:14.113216  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:23:14.131102  530227 start.go:305] post-start completed in 150.235781ms
	I0325 02:23:14.131193  530227 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:23:14.131252  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.166319  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.255555  530227 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:23:14.259268  530227 fix.go:57] fixHost completed within 4.422236664s
	I0325 02:23:14.259296  530227 start.go:81] releasing machines lock for "default-k8s-different-port-20220325020956-262786", held for 4.422290413s
	I0325 02:23:14.259383  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.295568  530227 ssh_runner.go:195] Run: systemctl --version
	I0325 02:23:14.295622  530227 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:23:14.295624  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.295670  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.331630  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.332124  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.440710  530227 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:23:14.453593  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:23:14.463531  530227 docker.go:183] disabling docker service ...
	I0325 02:23:14.463587  530227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:23:14.473649  530227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:23:14.482885  530227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:23:10.781510  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:13.279624  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:15.280218  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:14.552504  530227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:23:14.625188  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 02:23:14.634619  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:23:14.648987  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
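
The long printf argument above is simply the base64-encoded /etc/containerd/config.toml that minikube writes before restarting containerd; decoded, it pins (among other settings) conf_dir = "/etc/cni/net.mk" (matching the kubelet.cni-conf-dir flag in this run), sandbox_image = "k8s.gcr.io/pause:3.6", and SystemdCgroup = false. A quick way to inspect such a payload in Go (sketch; only the first two config lines are inlined here, paste the full string from the log to see the whole file):

// Decode the base64 containerd config payload seen in the log above.
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	payload := "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgo=" // truncated
	cfg, err := base64.StdEncoding.DecodeString(payload)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(cfg)) // version = 2 / root = "/var/lib/containerd"
}
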
	I0325 02:23:14.662584  530227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:23:14.669661  530227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:23:14.676535  530227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:23:14.749687  530227 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:23:14.824010  530227 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:23:14.824124  530227 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:23:14.828479  530227 start.go:462] Will wait 60s for crictl version
	I0325 02:23:14.828546  530227 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:23:14.854134  530227 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:23:14Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
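
The first crictl probe lands while containerd is still coming up ("server is not initialized yet"), so retry.go schedules another attempt 11s later, inside the 60s budget declared at 02:23:14.828479. A minimal sketch of that probe-with-backoff pattern (illustrative only; minikube's retry.go is more general):

// Keep probing `crictl version` until the runtime answers or a
// deadline passes, mirroring the wait visible in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForCrictl(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for delay := time.Second; ; delay *= 2 { // simple exponential backoff
		out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("crictl never became ready: %v (%s)", err, out)
		}
		time.Sleep(delay)
	}
}

func main() {
	if err := waitForCrictl(60 * time.Second); err != nil {
		panic(err)
	}
}
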
	I0325 02:23:17.779273  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:19.780082  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:21.780204  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:24.279380  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:25.901131  530227 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:23:25.924531  530227 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 02:23:25.924599  530227 ssh_runner.go:195] Run: containerd --version
	I0325 02:23:25.944738  530227 ssh_runner.go:195] Run: containerd --version
	I0325 02:23:25.965406  530227 out.go:176] * Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	I0325 02:23:25.965490  530227 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20220325020956-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:23:25.998365  530227 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0325 02:23:26.001776  530227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:23:26.013555  530227 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 02:23:26.013655  530227 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:23:26.013730  530227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:23:26.037965  530227 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:23:26.037994  530227 containerd.go:526] Images already preloaded, skipping extraction
	I0325 02:23:26.038048  530227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:23:26.062141  530227 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:23:26.062166  530227 cache_images.go:84] Images are preloaded, skipping loading
	I0325 02:23:26.062213  530227 ssh_runner.go:195] Run: sudo crictl info
	I0325 02:23:26.086309  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:26.086334  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:26.086348  530227 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 02:23:26.086361  530227 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220325020956-262786 NodeName:default-k8s-different-port-20220325020956-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 02:23:26.086482  530227 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220325020956-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
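
The three YAML documents above (InitConfiguration/ClusterConfiguration plus the kubelet and kube-proxy configs) are rendered from the parsed kubeadm options logged at kubeadm.go:158. A hedged sketch of that kind of rendering with text/template; the struct and template are illustrative fragments, not minikube's actual generator:

    package main

    import (
        "os"
        "text/template"
    )

    type initCfg struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        CRISocket        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // Values taken from the options dump above.
        t.Execute(os.Stdout, initCfg{
            AdvertiseAddress: "192.168.49.2",
            BindPort:         8444,
            NodeName:         "default-k8s-different-port-20220325020956-262786",
            CRISocket:        "/run/containerd/containerd.sock",
        })
    }
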
	
	I0325 02:23:26.086574  530227 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220325020956-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
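
One detail worth noting in the kubelet drop-in above: the empty "ExecStart=" line is deliberate, since systemd treats it as clearing the ExecStart inherited from the base kubelet.service before the full command line is set. A small illustrative sketch that assembles such a drop-in from per-flag options like the cni-conf-dir entry in the config dump:

    package main

    import (
        "fmt"
        "strings"
    )

    // kubeletDropIn builds a systemd drop-in body; flags is an ordered list
    // of key/value pairs so the output is deterministic.
    func kubeletDropIn(binary string, flags [][2]string) string {
        var b strings.Builder
        b.WriteString("[Unit]\nWants=containerd.service\n\n[Service]\n")
        // An empty ExecStart= clears the value inherited from the base unit.
        b.WriteString("ExecStart=\nExecStart=" + binary)
        for _, kv := range flags {
            fmt.Fprintf(&b, " --%s=%s", kv[0], kv[1])
        }
        b.WriteString("\n\n[Install]\n")
        return b.String()
    }

    func main() {
        fmt.Print(kubeletDropIn("/var/lib/minikube/binaries/v1.23.3/kubelet",
            [][2]string{
                {"cni-conf-dir", "/etc/cni/net.mk"},
                {"node-ip", "192.168.49.2"},
            }))
    }
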
	I0325 02:23:26.086621  530227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
	I0325 02:23:26.093791  530227 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 02:23:26.093861  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 02:23:26.101104  530227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (593 bytes)
	I0325 02:23:26.114154  530227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0325 02:23:26.127481  530227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I0325 02:23:26.139891  530227 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0325 02:23:26.142699  530227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
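
Both /etc/hosts edits above use the same idiom: strip any stale line ending in the name, append a fresh "IP<TAB>name" mapping to a temp file, then "sudo cp" it over /etc/hosts so only the final copy needs root. A minimal Go sketch of just the filtering logic (host names as in the log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost drops any existing line for name and appends a new mapping.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return strings.Join(kept, "\n")
    }

    func main() {
        data, _ := os.ReadFile("/etc/hosts")
        fmt.Print(upsertHost(strings.TrimRight(string(data), "\n"),
            "192.168.49.2", "control-plane.minikube.internal"))
    }
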
	I0325 02:23:26.151979  530227 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786 for IP: 192.168.49.2
	I0325 02:23:26.152115  530227 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 02:23:26.152173  530227 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 02:23:26.152283  530227 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.key
	I0325 02:23:26.152367  530227 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key.dd3b5fb2
	I0325 02:23:26.152432  530227 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key
	I0325 02:23:26.152572  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 02:23:26.152618  530227 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 02:23:26.152633  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 02:23:26.152719  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 02:23:26.152762  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 02:23:26.152796  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 02:23:26.152856  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:23:26.153663  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 02:23:26.170543  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0325 02:23:26.188516  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 02:23:26.206252  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0325 02:23:26.223851  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 02:23:26.240997  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 02:23:26.258925  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 02:23:26.276782  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 02:23:26.293956  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 02:23:26.311184  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 02:23:26.328788  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 02:23:26.345739  530227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 02:23:26.358217  530227 ssh_runner.go:195] Run: openssl version
	I0325 02:23:26.363310  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 02:23:26.371143  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.374386  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.374446  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.379667  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
	I0325 02:23:26.386880  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 02:23:26.394406  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.397558  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.397619  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.402576  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0325 02:23:26.409580  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 02:23:26.416799  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.419794  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.419843  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.424480  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
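
The cert installation loop above follows the OpenSSL hashed-directory convention: each PEM is copied under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its subject hash (the 3ec20f2e.0, b5213941.0, and 51391683.0 names), which is how OpenSSL-based clients locate trust anchors. A hedged sketch of that step; the helper itself is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // installCA links a PEM into /etc/ssl/certs under its c_rehash-style name.
    func installCA(pem string) error {
        // openssl prints the subject hash used for the link name.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        return exec.Command("sudo", "ln", "-fs", pem, link).Run()
    }

    func main() {
        fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
    }
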
	I0325 02:23:26.431093  530227 kubeadm.go:391] StartCluster: {Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:23:26.431219  530227 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 02:23:26.431267  530227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:23:26.455469  530227 cri.go:87] found id: "f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db"
	I0325 02:23:26.455495  530227 cri.go:87] found id: "246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	I0325 02:23:26.455501  530227 cri.go:87] found id: "dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b"
	I0325 02:23:26.455506  530227 cri.go:87] found id: "21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73"
	I0325 02:23:26.455510  530227 cri.go:87] found id: "bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7"
	I0325 02:23:26.455515  530227 cri.go:87] found id: "6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182"
	I0325 02:23:26.455520  530227 cri.go:87] found id: "c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd"
	I0325 02:23:26.455524  530227 cri.go:87] found id: ""
	I0325 02:23:26.455562  530227 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0325 02:23:26.469264  530227 cri.go:114] JSON = null
	W0325 02:23:26.469319  530227 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 7
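
The warning above comes from a consistency check: crictl ps found 7 kube-system containers, but runc, asked for the paused ones as JSON, returned the literal "null", which unmarshals to zero entries. A sketch of that comparison under the same root path; the helper names are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // pausedContainers counts paused containers reported by runc.
    func pausedContainers(root string) (int, error) {
        out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
        if err != nil {
            return 0, err
        }
        var list []struct {
            ID     string `json:"id"`
            Status string `json:"status"`
        }
        // A bare "null" unmarshals into a nil slice, i.e. zero containers.
        if err := json.Unmarshal(out, &list); err != nil {
            return 0, err
        }
        n := 0
        for _, c := range list {
            if c.Status == "paused" {
                n++
            }
        }
        return n, nil
    }

    func main() {
        fmt.Println(pausedContainers("/run/containerd/runc/k8s.io"))
    }
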
	I0325 02:23:26.469383  530227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 02:23:26.476380  530227 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0325 02:23:26.476423  530227 kubeadm.go:601] restartCluster start
	I0325 02:23:26.476467  530227 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0325 02:23:26.483313  530227 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:26.484051  530227 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220325020956-262786" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:23:26.484409  530227 kubeconfig.go:127] "default-k8s-different-port-20220325020956-262786" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0325 02:23:26.485050  530227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:23:26.486481  530227 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0325 02:23:26.493604  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.493676  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.502078  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:26.702482  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.702567  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.712014  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:26.902246  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.902320  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.910978  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.103208  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.103289  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.111964  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.303121  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.303213  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.312214  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.502493  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.502598  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.511468  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.702747  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.702890  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.711697  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.902931  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.903050  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.912319  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.102538  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.102634  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.111710  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.303008  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.303080  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.312078  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.502221  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.502313  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.511095  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.702230  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.702303  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.711103  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.902322  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.902413  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.911515  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.102704  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.102774  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.111434  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.302770  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.302858  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.311706  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.503069  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.503150  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.512690  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.512721  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.512770  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.521635  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.521669  530227 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
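
The run of checks above is a simple poll loop: roughly every 200ms minikube greps for a kube-apiserver process over SSH and, once the window expires with no hit, concludes the control plane needs to be reconfigured. A minimal sketch of that poll-until-deadline shape, with illustrative timings:

    package main

    import (
        "errors"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until it succeeds or the deadline
    // passes; pgrep exits 0 as soon as a matching process exists.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(200 * time.Millisecond)
        }
        return errors.New("timed out waiting for the condition")
    }

    func main() {
        _ = waitForAPIServerProcess(3 * time.Second)
    }
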
	I0325 02:23:29.521677  530227 kubeadm.go:1067] stopping kube-system containers ...
	I0325 02:23:29.521695  530227 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0325 02:23:29.521749  530227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:23:26.279462  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:28.279531  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:30.280060  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:29.546890  530227 cri.go:87] found id: "f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db"
	I0325 02:23:29.546921  530227 cri.go:87] found id: "246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	I0325 02:23:29.546927  530227 cri.go:87] found id: "dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b"
	I0325 02:23:29.546932  530227 cri.go:87] found id: "21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73"
	I0325 02:23:29.546937  530227 cri.go:87] found id: "bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7"
	I0325 02:23:29.546942  530227 cri.go:87] found id: "6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182"
	I0325 02:23:29.546946  530227 cri.go:87] found id: "c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd"
	I0325 02:23:29.546979  530227 cri.go:87] found id: ""
	I0325 02:23:29.546987  530227 cri.go:232] Stopping containers: [f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db 246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739 dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b 21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73 bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7 6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182 c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd]
	I0325 02:23:29.547049  530227 ssh_runner.go:195] Run: which crictl
	I0325 02:23:29.550389  530227 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db 246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739 dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b 21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73 bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7 6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182 c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd
	I0325 02:23:29.575922  530227 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0325 02:23:29.586795  530227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:23:29.594440  530227 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar 25 02:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Mar 25 02:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2131 Mar 25 02:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Mar 25 02:10 /etc/kubernetes/scheduler.conf
	
	I0325 02:23:29.594520  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0325 02:23:29.601472  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0325 02:23:29.608305  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0325 02:23:29.615261  530227 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.615319  530227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0325 02:23:29.622383  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0325 02:23:29.629095  530227 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.629161  530227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0325 02:23:29.636095  530227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:23:29.642934  530227 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0325 02:23:29.642998  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:29.687932  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.297307  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.428688  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.476555  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
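
Rather than a full "kubeadm init", the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged /var/tmp/minikube/kubeadm.yaml, exactly the commands visible above. A hedged sketch of that sequencing:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            // In the real run, PATH points at the versioned binaries dir.
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", p, err, out)
                return
            }
        }
    }
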
	I0325 02:23:30.528341  530227 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:23:30.528397  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:31.037340  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:31.536903  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:32.037557  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:32.537100  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:33.037156  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:33.537124  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:34.037604  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:34.536762  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:32.280264  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:34.779413  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:35.037573  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:35.536890  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.037157  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.536733  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.598317  530227 api_server.go:71] duration metric: took 6.069979844s to wait for apiserver process to appear ...
	I0325 02:23:36.598362  530227 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:23:36.598380  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:36.598866  530227 api_server.go:256] stopped: https://192.168.49.2:8444/healthz: Get "https://192.168.49.2:8444/healthz": dial tcp 192.168.49.2:8444: connect: connection refused
	I0325 02:23:37.099575  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:36.779484  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:38.779979  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:40.211650  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0325 02:23:40.211687  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0325 02:23:40.599053  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:40.603812  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:23:40.603846  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:23:41.099269  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:41.104481  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:23:41.104517  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:23:41.599902  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:41.604945  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0325 02:23:41.612918  530227 api_server.go:140] control plane version: v1.23.3
	I0325 02:23:41.612944  530227 api_server.go:130] duration metric: took 5.014575703s to wait for apiserver health ...
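
The health wait above treats three distinct responses as "not ready yet": connection refused while the static pod restarts, 403 before anonymous access to /healthz is bootstrapped, and 500 while poststarthooks such as rbac/bootstrap-roles are still failing; only a 200 body of "ok" ends the loop. A minimal sketch of that predicate (endpoint from the log, helper illustrative):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // apiserverHealthy reports whether /healthz answers 200 "ok".
    func apiserverHealthy(url string) bool {
        c := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // The apiserver cert is not in the host trust store here.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := c.Get(url)
        if err != nil {
            return false // connection refused while the pod restarts
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok"
    }

    func main() {
        for !apiserverHealthy("https://192.168.49.2:8444/healthz") {
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver healthy")
    }
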
	I0325 02:23:41.612957  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:41.612965  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:41.615242  530227 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:23:41.615325  530227 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:23:41.619644  530227 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 02:23:41.619669  530227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:23:41.633910  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:23:42.356822  530227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:23:42.365307  530227 system_pods.go:59] 9 kube-system pods found
	I0325 02:23:42.365343  530227 system_pods.go:61] "coredns-64897985d-9tgbz" [0d638e01-927d-4431-bf10-393b424f801a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365353  530227 system_pods.go:61] "etcd-default-k8s-different-port-20220325020956-262786" [10e10258-89d5-423b-850f-60ef4b12b83a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0325 02:23:42.365361  530227 system_pods.go:61] "kindnet-kt955" [87a42b24-60b7-415b-abc9-e574262093c0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:23:42.365368  530227 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220325020956-262786" [877f6ccd-dcc7-47ff-8574-9b9ec1b05a25] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0325 02:23:42.365376  530227 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220325020956-262786" [cbd16e08-169e-458a-b9c2-bcaa627475cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0325 02:23:42.365382  530227 system_pods.go:61] "kube-proxy-7cpjt" [6d1657ba-6fcd-4ee8-8293-b6aa0b7e1fb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0325 02:23:42.365387  530227 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220325020956-262786" [7b21b770-272f-4183-a1e4-6cca761e7be8] Running
	I0325 02:23:42.365395  530227 system_pods.go:61] "metrics-server-b955d9d8-h94qn" [f250996f-f9e2-41f2-ba86-6da05d627811] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365401  530227 system_pods.go:61] "storage-provisioner" [1f4e27b1-94bb-49ed-b16e-7237ce00c11a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365409  530227 system_pods.go:74] duration metric: took 8.560724ms to wait for pod list to return data ...
	I0325 02:23:42.365419  530227 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:23:42.368395  530227 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:23:42.368426  530227 node_conditions.go:123] node cpu capacity is 8
	I0325 02:23:42.368439  530227 node_conditions.go:105] duration metric: took 3.013418ms to run NodePressure ...
	I0325 02:23:42.368460  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:42.498603  530227 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0325 02:23:42.503044  530227 kubeadm.go:752] kubelet initialised
	I0325 02:23:42.503087  530227 kubeadm.go:753] duration metric: took 4.396508ms waiting for restarted kubelet to initialise ...
	I0325 02:23:42.503097  530227 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:23:42.508446  530227 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-9tgbz" in "kube-system" namespace to be "Ready" ...
	I0325 02:23:44.514894  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
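
The pod_ready messages above boil down to a condition check: a pod counts as "Ready" only when its status carries a Ready condition with status True, while the Pending coredns pod instead reports PodScheduled=False with an Unschedulable reason because the node is still tainted not-ready. A sketch of that predicate using kubectl output; the helper is illustrative:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type podStatus struct {
        Status struct {
            Phase      string `json:"phase"`
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
                Reason string `json:"reason"`
            } `json:"conditions"`
        } `json:"status"`
    }

    // podReady returns true only for a Ready condition with status True.
    func podReady(ns, name string) (bool, error) {
        out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name, "-o", "json").Output()
        if err != nil {
            return false, err
        }
        var p podStatus
        if err := json.Unmarshal(out, &p); err != nil {
            return false, err
        }
        for _, c := range p.Status.Conditions {
            if c.Type == "Ready" {
                return c.Status == "True", nil
            }
        }
        return false, nil
    }

    func main() {
        fmt.Println(podReady("kube-system", "coredns-64897985d-9tgbz"))
    }
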
	I0325 02:23:41.279719  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:43.779807  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:46.515114  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:49.014836  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:46.279223  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:48.279265  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:50.280221  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:51.514564  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:54.014786  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:52.780223  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:55.279495  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:56.514871  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:59.014358  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:57.280104  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:59.779435  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:01.015007  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... pod_ready.go:102 from process 530227 repeats the identical Unschedulable status every 2-3s from 02:24:03 through 02:25:34 ...]
	I0325 02:24:01.779945  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	[... node_ready.go:58 from process 519649 repeats the identical "Ready":"False" status every 2-3s from 02:24:03 through 02:25:38; the two processes' buffered output interleaves slightly out of timestamp order ...]
	I0325 02:25:38.282298  519649 node_ready.go:38] duration metric: took 4m0.009544217s waiting for node "no-preload-20220325020326-262786" to be "Ready" ...
	I0325 02:25:38.285018  519649 out.go:176] 
	W0325 02:25:38.285266  519649 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:25:38.285284  519649 out.go:241] * 
	W0325 02:25:38.286304  519649 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
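The wait that times out above is a fixed-interval poll against the API server: node_ready.go re-reads the node's Ready condition every few seconds until a deadline (4m0s here, inside the overall 6m0s GUEST_START budget). A minimal sketch of that pattern using client-go and apimachinery's wait package; waitNodeReady, the 2-second interval, and the kubeconfig lookup are illustrative assumptions, not minikube's actual code:

// nodeready.go: poll until a node reports Ready=True or the deadline passes.
// Illustrative sketch with client-go; not minikube's actual implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// This is the line repeated throughout the log above.
				fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// 4m0s matches the waitNodeCondition deadline reported in the log.
	if err := waitNodeReady(cs, "no-preload-20220325020326-262786", 4*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}

The pod wait continuing below has the same loop shape; only the condition being tested differs.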
	I0325 02:25:36.513926  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... pod_ready.go:102 repeats the identical Unschedulable status every 2-3s from 02:25:38 through 02:27:40 ...]
	I0325 02:27:42.510816  530227 pod_ready.go:81] duration metric: took 4m0.002335219s waiting for pod "coredns-64897985d-9tgbz" in "kube-system" namespace to be "Ready" ...
	E0325 02:27:42.510845  530227 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9tgbz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0325 02:27:42.510866  530227 pod_ready.go:38] duration metric: took 4m0.007755725s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:27:42.510971  530227 kubeadm.go:605] restartCluster took 4m16.034541089s
	W0325 02:27:42.511146  530227 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
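The 4m wait above fails because the lone node still carries the node.kubernetes.io/not-ready:NoSchedule taint, so the scheduler can never place coredns-64897985d-9tgbz; minikube therefore abandons restartCluster and falls back to the full kubeadm reset below. As a sketch of how to confirm the taint from outside the test run (assuming kubectl has a context named after the profile, which is minikube's default):

    kubectl --context default-k8s-different-port-20220325020956-262786 get nodes \
      -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints[*].key
    kubectl --context default-k8s-different-port-20220325020956-262786 \
      -n kube-system get pod coredns-64897985d-9tgbz -o wide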
	I0325 02:27:42.511207  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0325 02:27:44.339219  530227 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.827981438s)
	I0325 02:27:44.339290  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:27:44.348982  530227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:27:44.356461  530227 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 02:27:44.356520  530227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:27:44.363951  530227 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0325 02:27:44.364022  530227 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0325 02:27:57.283699  530227 out.go:203]   - Generating certificates and keys ...
	I0325 02:27:57.286878  530227 out.go:203]   - Booting up control plane ...
	I0325 02:27:57.289872  530227 out.go:203]   - Configuring RBAC rules ...
	I0325 02:27:57.291696  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:27:57.291719  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:27:57.293919  530227 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:27:57.294011  530227 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:27:57.297810  530227 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 02:27:57.297833  530227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:27:57.312402  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
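Because the docker driver is paired with the containerd runtime, minikube picks its bundled kindnet manifest (scp'd to /var/tmp/minikube/cni.yaml above) and applies it with the cluster's own kubectl. A hedged way to watch that rollout, assuming the DaemonSet keeps its usual name kindnet in kube-system (the name is not shown in this log):

    sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset kindnet --timeout=60s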
	I0325 02:27:58.034457  530227 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 02:27:58.034521  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=default-k8s-different-port-20220325020956-262786 minikube.k8s.io/updated_at=2022_03_25T02_27_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:58.034522  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:58.101247  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:58.118657  530227 ops.go:34] apiserver oom_adj: -16
	I0325 02:27:58.688158  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:59.188855  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:59.688734  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:00.188158  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:00.688215  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:01.188912  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:01.688969  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:02.188876  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:02.688656  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:03.188835  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:03.688222  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:04.188154  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:04.688514  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:05.188103  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:05.688209  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:06.187993  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:06.688197  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:07.188677  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:07.688113  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:08.187906  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:08.688331  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:09.188315  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:09.688031  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:09.751277  530227 kubeadm.go:1020] duration metric: took 11.716819332s to wait for elevateKubeSystemPrivileges.
	I0325 02:28:09.751307  530227 kubeadm.go:393] StartCluster complete in 4m43.320221544s
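The burst of `kubectl get sa default` calls above is minikube polling until the "default" ServiceAccount exists, which only happens once the controller-manager's token controller is running; that is the ~11.7s elevateKubeSystemPrivileges wait. The loop is equivalent to this bash sketch (same binary path and flags as the log lines, purely illustrative):

    until sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the ServiceAccount controller has created "default"
    done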
	I0325 02:28:09.751334  530227 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:28:09.751483  530227 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:28:09.752678  530227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:28:10.268555  530227 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220325020956-262786" rescaled to 1
	I0325 02:28:10.268633  530227 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:28:10.268674  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 02:28:10.270943  530227 out.go:176] * Verifying Kubernetes components...
	I0325 02:28:10.268968  530227 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0325 02:28:10.269163  530227 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:28:10.271075  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:28:10.271163  530227 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271166  530227 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271183  530227 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271188  530227 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271189  530227 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.271192  530227 addons.go:165] addon metrics-server should already be in state true
	I0325 02:28:10.271207  530227 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.271217  530227 addons.go:165] addon dashboard should already be in state true
	I0325 02:28:10.271232  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.271251  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.271164  530227 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271302  530227 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.271316  530227 addons.go:165] addon storage-provisioner should already be in state true
	I0325 02:28:10.271343  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.271538  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.271708  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.271708  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.271833  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.344040  530227 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0325 02:28:10.344132  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0325 02:28:10.344144  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0325 02:28:10.344219  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.346679  530227 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:28:10.346811  530227 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:28:10.346826  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 02:28:10.346882  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.353938  530227 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0325 02:28:10.355562  530227 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I0325 02:28:10.355640  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0325 02:28:10.355656  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0325 02:28:10.355719  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.383518  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.385223  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.391464  530227 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.391493  530227 addons.go:165] addon default-storageclass should already be in state true
	I0325 02:28:10.391524  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.392049  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.400074  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.437891  530227 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 02:28:10.437915  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 02:28:10.437962  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.471205  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.494523  530227 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220325020956-262786" to be "Ready" ...
	I0325 02:28:10.494562  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
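The pipeline above edits CoreDNS in place: it dumps the coredns ConfigMap, uses sed to insert a hosts block immediately before the "forward . /etc/resolv.conf" line, and feeds the result back through kubectl replace. The Corefile fragment it produces (taken directly from the sed expression) is:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

This is what the "host record injected into CoreDNS" line below confirms, and it is why pods in the cluster can resolve host.minikube.internal to the host gateway.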
	I0325 02:28:10.609220  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:28:10.609589  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0325 02:28:10.609655  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0325 02:28:10.609633  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0325 02:28:10.609758  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0325 02:28:10.700787  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0325 02:28:10.700823  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0325 02:28:10.701805  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0325 02:28:10.701834  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0325 02:28:10.800343  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0325 02:28:10.800381  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0325 02:28:10.808905  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 02:28:10.810521  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:28:10.810550  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0325 02:28:10.899094  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0325 02:28:10.899126  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0325 02:28:10.905212  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:28:11.003467  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0325 02:28:11.003501  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0325 02:28:11.102902  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0325 02:28:11.102933  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0325 02:28:11.203761  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0325 02:28:11.203793  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0325 02:28:11.210549  530227 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0325 02:28:11.294868  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0325 02:28:11.294905  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0325 02:28:11.407000  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:28:11.407036  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0325 02:28:11.594579  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:28:11.993918  530227 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.08865456s)
	I0325 02:28:11.994023  530227 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:12.406348  530227 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0325 02:28:12.406384  530227 addons.go:417] enableAddons completed in 2.137426118s
	I0325 02:28:12.501452  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:15.001678  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:17.002236  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:19.501942  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:22.002181  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:24.002264  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:26.501667  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:29.002078  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:31.501585  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:34.001805  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:36.002041  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:38.002339  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:40.501697  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:43.001641  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:45.501691  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:48.001669  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:50.001733  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:52.001838  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:54.002159  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:56.501779  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:59.001558  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:01.002011  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:03.501394  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:06.001566  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:08.002174  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:10.501618  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:12.502189  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:15.005102  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:17.501055  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:19.501433  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:22.001489  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:24.001553  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:26.001738  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:28.501727  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:31.001897  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:33.002070  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:35.501917  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:38.001428  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:40.001672  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:42.002147  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:44.501313  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:47.001730  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:49.002193  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:51.501480  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:53.502170  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:56.001749  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:58.501848  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:01.001969  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:03.002165  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:05.501492  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:07.501983  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:10.001162  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:12.001919  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:14.001948  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:16.501197  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:18.502117  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:20.502225  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:23.002141  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:25.502083  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:28.001876  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:30.002027  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:32.503056  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:35.001423  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:37.501534  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:40.002190  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:42.502274  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:45.001432  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:47.001999  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:49.501359  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:52.001784  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:54.501845  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:56.502160  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:59.001872  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:01.501682  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:04.002218  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:06.002445  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:08.501995  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:10.502365  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:13.001863  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:15.002027  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:17.003337  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:19.501499  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:21.501688  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:24.001679  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:26.502090  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:29.001645  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:31.501716  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:33.501846  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:36.001006  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:38.002402  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:40.501392  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:42.502189  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:45.002082  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:47.500971  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:49.502314  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:52.001990  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:54.501597  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:57.001789  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:59.002549  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:01.003204  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:03.501518  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:05.502059  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:07.502226  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:10.001697  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:10.504266  530227 node_ready.go:38] duration metric: took 4m0.009695209s waiting for node "default-k8s-different-port-20220325020956-262786" to be "Ready" ...
	I0325 02:32:10.507459  530227 out.go:176] 
	W0325 02:32:10.507629  530227 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:32:10.507649  530227 out.go:241] * 
	W0325 02:32:10.508449  530227 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
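The GUEST_START failure means the 6m node wait above never observed Ready=True; the "describe nodes" section below shows the cause (NetworkReady=false, "cni plugin not initialized"). For local triage one could start with the profile-scoped status and log commands (the second is the same command the box above recommends):

    minikube -p default-k8s-different-port-20220325020956-262786 status
    minikube -p default-k8s-different-port-20220325020956-262786 logs --file=logs.txt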
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	202a86cfbfa5d       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   a65827bfd7c9e
	6832a4d07d8b0       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   a65827bfd7c9e
	dd322dba64c8a       9b7cc99821098       4 minutes ago        Running             kube-proxy                0                   0ba6261aad033
	7be9a97449cc3       b07520cd7ab76       4 minutes ago        Running             kube-controller-manager   2                   a6aab84cb155b
	7e2801b636d95       f40be0088a83e       4 minutes ago        Running             kube-apiserver            2                   1df88ac29bb94
	1c45766f9b001       99a3486be4f28       4 minutes ago        Running             kube-scheduler            2                   779cc1c8f883d
	1918920313743       25f8c7f3da61c       4 minutes ago        Running             etcd                      2                   c959e762476d0
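Note the kindnet-cni rows: attempt 0 exited (the containerd log below records its shim disconnecting at 02:30:51) and attempt 1 was restarted about a minute before this dump, yet the node never became Ready. A sketch for pulling the exited attempt's output from inside the node (crictl accepts the truncated container ID shown above):

    minikube -p default-k8s-different-port-20220325020956-262786 ssh -- sudo crictl ps -a --name kindnet
    minikube -p default-k8s-different-port-20220325020956-262786 ssh -- sudo crictl logs 6832a4d07d8b0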
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-03-25 02:23:10 UTC, end at Fri 2022-03-25 02:32:11 UTC. --
	Mar 25 02:28:10 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:28:10.386460481Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-rfd9g,Uid:7d62a70e-ed01-4578-b1f2-804bf5014d6f,Namespace:kube-system,Attempt:0,}"
	Mar 25 02:28:10 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:28:10.484686693Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ba6261aad033bce6ea673f5e0f31c2ab5c2445f0acb562a67f9559d5999ce5d pid=3291
	Mar 25 02:28:10 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:28:10.493265678Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a65827bfd7c9e3dcf303b05f51ed6166192a17b00a8bf8133576a04d3decd6da pid=3311
	Mar 25 02:28:10 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:28:10.701293645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rfd9g,Uid:7d62a70e-ed01-4578-b1f2-804bf5014d6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ba6261aad033bce6ea673f5e0f31c2ab5c2445f0acb562a67f9559d5999ce5d\""
	Mar 25 02:28:10 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:28:10.708938426Z" level=info msg="CreateContainer within sandbox \"0ba6261aad033bce6ea673f5e0f31c2ab5c2445f0acb562a67f9559d5999ce5d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Mar 25 02:28:10 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:28:10.804426984Z" level=info msg="CreateContainer within sandbox \"0ba6261aad033bce6ea673f5e0f31c2ab5c2445f0acb562a67f9559d5999ce5d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dd322dba64c8aa7660220ec9853be1f48708b62522e445198ad60efe62d029ff\""
	Mar 25 02:28:10 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:28:10.805698034Z" level=info msg="StartContainer for \"dd322dba64c8aa7660220ec9853be1f48708b62522e445198ad60efe62d029ff\""
	Mar 25 02:28:10 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:28:10.899834454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-dgbq7,Uid:24f771e6-9a01-4bd1-9da1-2846eb0f9852,Namespace:kube-system,Attempt:0,} returns sandbox id \"a65827bfd7c9e3dcf303b05f51ed6166192a17b00a8bf8133576a04d3decd6da\""
	Mar 25 02:28:10 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:28:10.905594599Z" level=info msg="CreateContainer within sandbox \"a65827bfd7c9e3dcf303b05f51ed6166192a17b00a8bf8133576a04d3decd6da\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Mar 25 02:28:11 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:28:11.006913080Z" level=info msg="CreateContainer within sandbox \"a65827bfd7c9e3dcf303b05f51ed6166192a17b00a8bf8133576a04d3decd6da\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"6832a4d07d8b063f82a511540f31f2ca94f55751848ab44261defdbea45d4910\""
	Mar 25 02:28:11 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:28:11.007809095Z" level=info msg="StartContainer for \"6832a4d07d8b063f82a511540f31f2ca94f55751848ab44261defdbea45d4910\""
	Mar 25 02:28:11 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:28:11.289032115Z" level=info msg="StartContainer for \"dd322dba64c8aa7660220ec9853be1f48708b62522e445198ad60efe62d029ff\" returns successfully"
	Mar 25 02:28:11 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:28:11.504494376Z" level=info msg="StartContainer for \"6832a4d07d8b063f82a511540f31f2ca94f55751848ab44261defdbea45d4910\" returns successfully"
	Mar 25 02:29:02 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:29:02.166464612Z" level=error msg="ContainerStatus for \"c3de5d21a9245b1a44375e4158eaff9516fb29e616a8defd38935da86e8b6a45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c3de5d21a9245b1a44375e4158eaff9516fb29e616a8defd38935da86e8b6a45\": not found"
	Mar 25 02:29:02 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:29:02.167063662Z" level=error msg="ContainerStatus for \"e165b4a744269f6b4a854b21bcc40cda676d2ff96ab4ce3ebd18e4d0a1f5d2f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e165b4a744269f6b4a854b21bcc40cda676d2ff96ab4ce3ebd18e4d0a1f5d2f8\": not found"
	Mar 25 02:29:02 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:29:02.167583120Z" level=error msg="ContainerStatus for \"4321fb7996962c7bf5ed294d61bf2194569a856ca677f06eff77835a6bf3767b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4321fb7996962c7bf5ed294d61bf2194569a856ca677f06eff77835a6bf3767b\": not found"
	Mar 25 02:29:02 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:29:02.168050997Z" level=error msg="ContainerStatus for \"dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b\": not found"
	Mar 25 02:30:51 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:30:51.825079516Z" level=info msg="shim disconnected" id=6832a4d07d8b063f82a511540f31f2ca94f55751848ab44261defdbea45d4910
	Mar 25 02:30:51 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:30:51.825154355Z" level=warning msg="cleaning up after shim disconnected" id=6832a4d07d8b063f82a511540f31f2ca94f55751848ab44261defdbea45d4910 namespace=k8s.io
	Mar 25 02:30:51 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:30:51.825175074Z" level=info msg="cleaning up dead shim"
	Mar 25 02:30:51 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:30:51.836800816Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:30:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3799\n"
	Mar 25 02:30:52 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:30:52.569045092Z" level=info msg="CreateContainer within sandbox \"a65827bfd7c9e3dcf303b05f51ed6166192a17b00a8bf8133576a04d3decd6da\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Mar 25 02:30:52 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:30:52.582499959Z" level=info msg="CreateContainer within sandbox \"a65827bfd7c9e3dcf303b05f51ed6166192a17b00a8bf8133576a04d3decd6da\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"202a86cfbfa5d0e2d8a73cb579305615cb704a24e591bc20e2b7ed4742958cef\""
	Mar 25 02:30:52 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:30:52.583105860Z" level=info msg="StartContainer for \"202a86cfbfa5d0e2d8a73cb579305615cb704a24e591bc20e2b7ed4742958cef\""
	Mar 25 02:30:52 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:30:52.789031745Z" level=info msg="StartContainer for \"202a86cfbfa5d0e2d8a73cb579305615cb704a24e591bc20e2b7ed4742958cef\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220325020956-262786
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220325020956-262786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95
	                    minikube.k8s.io/name=default-k8s-different-port-20220325020956-262786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_25T02_27_58_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Mar 2022 02:27:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220325020956-262786
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 25 Mar 2022 02:32:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Mar 2022 02:28:09 +0000   Fri, 25 Mar 2022 02:27:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Mar 2022 02:28:09 +0000   Fri, 25 Mar 2022 02:27:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Mar 2022 02:28:09 +0000   Fri, 25 Mar 2022 02:27:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 25 Mar 2022 02:28:09 +0000   Fri, 25 Mar 2022 02:27:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20220325020956-262786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                3d34c106-4e48-46f4-9bcf-ea4602321294
	  Boot ID:                    63fce5d9-a30b-498a-bfed-7dd46d23a363
	  Kernel Version:             5.13.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.3
	  Kube-Proxy Version:         v1.23.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220325020956-262786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-dgbq7                                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m1s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220325020956-262786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220325020956-262786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-rfd9g                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220325020956-262786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 4m    kube-proxy  
	  Normal  Starting                 4m9s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s  kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s  kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s  kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s  kubelet     Updated Node Allocatable limit across pods
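The describe output pins the failure down: the node stays NotReady with "container runtime network not ready ... cni plugin not initialized", meaning no usable CNI config ever reached the kubelet even though kindnet's pod was restarted. As a sketch, one could inspect what actually landed on disk inside the node (kindnet normally writes a conflist under /etc/cni/net.d, but that filename is an assumption, not shown in this log):

    minikube -p default-k8s-different-port-20220325020956-262786 ssh -- sudo ls -la /etc/cni/net.d /opt/cni/bin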
	
	* 
	* ==> dmesg <==
	* [  +1.011896] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.023877] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +0.953086] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethf4b51852
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 70 36 f8 5f b0 08 06
	[  +0.031950] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth6535462d
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a c6 0e 0e 23 49 08 06
	[  +0.644934] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth6535462d
	[  +0.401878] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth5b52bbbf
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 37 c2 ed 50 67 08 06
	[  +0.935995] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.035860] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.019942] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[Mar25 02:14] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth3401b1e0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 76 e9 ba cf fb f8 08 06
	[  +0.179199] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vetha9eb2fdf
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 32 f8 c1 5c 31 f0 08 06
	[  +0.564272] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethc1de7e82
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e d1 71 83 67 99 08 06
	[  +0.295714] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth71c4bd69
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe 35 ee 14 12 82 08 06
	
	* 
	* ==> etcd [1918920313743155ab9ccfc3912db66df70045dd4f71848f6def6ce3db51955e] <==
	* {"level":"info","ts":"2022-03-25T02:27:51.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-03-25T02:27:51.617Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-03-25T02:27:51.687Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-03-25T02:27:51.687Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-25T02:27:51.687Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-25T02:27:51.687Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-03-25T02:27:51.687Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-03-25T02:27:52.407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-03-25T02:27:52.407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-03-25T02:27:52.407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-03-25T02:27:52.407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-03-25T02:27:52.407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-25T02:27:52.407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-03-25T02:27:52.407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:default-k8s-different-port-20220325020956-262786 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:27:52.409Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-03-25T02:27:52.409Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  02:32:11 up  5:10,  0 users,  load average: 0.17, 0.23, 0.65
	Linux default-k8s-different-port-20220325020956-262786 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [7e2801b636d950a93f789ebef4b0df032b145af6b25620f294aeeeb2684410b5] <==
	* I0325 02:27:55.830600       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0325 02:27:56.410735       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0325 02:27:57.092093       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0325 02:27:57.101653       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0325 02:27:57.112694       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0325 02:28:02.201477       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0325 02:28:10.016976       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0325 02:28:10.115948       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0325 02:28:11.690215       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0325 02:28:11.987452       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.111.183.101]
	I0325 02:28:12.385563       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.99.170.33]
	I0325 02:28:12.397638       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.103.108.8]
	W0325 02:28:12.800499       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:28:12.800582       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:28:12.800591       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0325 02:29:12.801585       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:29:12.801650       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:29:12.801657       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0325 02:31:12.802601       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:31:12.802687       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:31:12.802694       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [7be9a97449cc36584579a06b845d0efa759c0b852c6a7e736172c2131a3e29ce] <==
	* I0325 02:28:12.185990       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0325 02:28:12.188382       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-ccd587f44" failed with pods "kubernetes-dashboard-ccd587f44-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0325 02:28:12.188395       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-ccd587f44-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0325 02:28:12.190072       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0325 02:28:12.190165       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0325 02:28:12.193755       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-ccd587f44" failed with pods "kubernetes-dashboard-ccd587f44-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0325 02:28:12.193780       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-ccd587f44-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0325 02:28:12.202078       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-ccd587f44-ghrlb"
	I0325 02:28:12.216352       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-hm5lc"
	E0325 02:28:39.707228       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:28:40.122160       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:29:09.726244       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:29:10.138466       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:29:39.744334       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:29:40.154270       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:30:09.761669       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:30:10.168981       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:30:39.778993       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:30:40.185522       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:31:09.795385       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:31:10.201963       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:31:39.810828       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:31:40.217925       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:32:09.826499       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:32:10.233445       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [dd322dba64c8aa7660220ec9853be1f48708b62522e445198ad60efe62d029ff] <==
	* I0325 02:28:11.496212       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0325 02:28:11.496289       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0325 02:28:11.496332       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0325 02:28:11.686587       1 server_others.go:206] "Using iptables Proxier"
	I0325 02:28:11.686649       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0325 02:28:11.686659       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0325 02:28:11.686690       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0325 02:28:11.687288       1 server.go:656] "Version info" version="v1.23.3"
	I0325 02:28:11.687967       1 config.go:317] "Starting service config controller"
	I0325 02:28:11.688037       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0325 02:28:11.688010       1 config.go:226] "Starting endpoint slice config controller"
	I0325 02:28:11.688180       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0325 02:28:11.788226       1 shared_informer.go:247] Caches are synced for service config 
	I0325 02:28:11.788263       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [1c45766f9b001a935c9e0207523f5e109bc199d1764e28b1815a1ffda87f035c] <==
	* W0325 02:27:54.394360       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0325 02:27:54.394430       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 02:27:54.394445       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 02:27:54.394431       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0325 02:27:54.394235       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 02:27:54.394542       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0325 02:27:54.394596       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0325 02:27:54.394627       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0325 02:27:54.394685       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 02:27:54.394717       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0325 02:27:54.394829       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:27:54.394898       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0325 02:27:54.397264       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0325 02:27:54.397299       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0325 02:27:55.258388       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 02:27:55.258426       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0325 02:27:55.258881       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 02:27:55.258928       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0325 02:27:55.289679       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0325 02:27:55.289719       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0325 02:27:55.455646       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0325 02:27:55.455682       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0325 02:27:55.474784       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0325 02:27:55.474827       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0325 02:27:55.790840       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-03-25 02:23:10 UTC, end at Fri 2022-03-25 02:32:11 UTC. --
	Mar 25 02:30:12 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:30:12.448520    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:30:17 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:30:17.449768    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:30:22 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:30:22.450697    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:30:27 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:30:27.452069    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:30:32 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:30:32.453343    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:30:37 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:30:37.454556    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:30:42 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:30:42.455991    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:30:47 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:30:47.457045    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:30:52 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:30:52.458345    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:30:52 default-k8s-different-port-20220325020956-262786 kubelet[2893]: I0325 02:30:52.566795    2893 scope.go:110] "RemoveContainer" containerID="6832a4d07d8b063f82a511540f31f2ca94f55751848ab44261defdbea45d4910"
	Mar 25 02:30:57 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:30:57.459939    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:31:02 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:31:02.461650    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:31:07 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:31:07.463363    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:31:12 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:31:12.464620    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:31:17 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:31:17.466215    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:31:22 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:31:22.467486    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:31:27 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:31:27.468550    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:31:32 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:31:32.469947    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:31:37 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:31:37.471197    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:31:42 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:31:42.472823    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:31:47 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:31:47.473876    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:31:52 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:31:52.474991    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:31:57 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:31:57.476371    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:32:02 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:32:02.477631    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:32:07 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:32:07.478704    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220325020956-262786 -n default-k8s-different-port-20220325020956-262786
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20220325020956-262786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-64897985d-hbhkk metrics-server-b955d9d8-689fn storage-provisioner dashboard-metrics-scraper-56974995fc-hm5lc kubernetes-dashboard-ccd587f44-ghrlb
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220325020956-262786 describe pod coredns-64897985d-hbhkk metrics-server-b955d9d8-689fn storage-provisioner dashboard-metrics-scraper-56974995fc-hm5lc kubernetes-dashboard-ccd587f44-ghrlb
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220325020956-262786 describe pod coredns-64897985d-hbhkk metrics-server-b955d9d8-689fn storage-provisioner dashboard-metrics-scraper-56974995fc-hm5lc kubernetes-dashboard-ccd587f44-ghrlb: exit status 1 (71.986879ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-hbhkk" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-689fn" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-hm5lc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-ccd587f44-ghrlb" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20220325020956-262786 describe pod coredns-64897985d-hbhkk metrics-server-b955d9d8-689fn storage-provisioner dashboard-metrics-scraper-56974995fc-hm5lc kubernetes-dashboard-ccd587f44-ghrlb: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (543.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-g7pm5" [ddcc61b3-8f67-41dc-80a1-1794ff5c682a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E0325 02:33:47.790690  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E0325 02:33:56.094409  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E0325 02:33:57.674089  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E0325 02:34:12.322580  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
start_stop_delete_test.go:259: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:259: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220325020326-262786 -n no-preload-20220325020326-262786
E0325 02:34:40.294545  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
start_stop_delete_test.go:259: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2022-03-25 02:34:40.65556269 +0000 UTC m=+4604.981691675
start_stop_delete_test.go:259: (dbg) Run:  kubectl --context no-preload-20220325020326-262786 describe po kubernetes-dashboard-ccd587f44-g7pm5 -n kubernetes-dashboard
start_stop_delete_test.go:259: (dbg) Non-zero exit: kubectl --context no-preload-20220325020326-262786 describe po kubernetes-dashboard-ccd587f44-g7pm5 -n kubernetes-dashboard: context deadline exceeded (1.777µs)
start_stop_delete_test.go:259: kubectl --context no-preload-20220325020326-262786 describe po kubernetes-dashboard-ccd587f44-g7pm5 -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:259: (dbg) Run:  kubectl --context no-preload-20220325020326-262786 logs kubernetes-dashboard-ccd587f44-g7pm5 -n kubernetes-dashboard
start_stop_delete_test.go:259: (dbg) Non-zero exit: kubectl --context no-preload-20220325020326-262786 logs kubernetes-dashboard-ccd587f44-g7pm5 -n kubernetes-dashboard: context deadline exceeded (166ns)
start_stop_delete_test.go:259: kubectl --context no-preload-20220325020326-262786 logs kubernetes-dashboard-ccd587f44-g7pm5 -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20220325020326-262786
helpers_test.go:236: (dbg) docker inspect no-preload-20220325020326-262786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778",
	        "Created": "2022-03-25T02:03:28.535684956Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-25T02:16:36.215228174Z",
	            "FinishedAt": "2022-03-25T02:16:34.946901711Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/hostname",
	        "HostsPath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/hosts",
	        "LogPath": "/var/lib/docker/containers/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778/6f52c20ff4ed9d9bad3c4df830f1077f18740a95b8cd4ba5633de40edade5778-json.log",
	        "Name": "/no-preload-20220325020326-262786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-20220325020326-262786:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220325020326-262786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6-init/diff:/var/lib/docker/overlay2/be54eb94fcdf63a43430c63a2deca34069b6322a2c5c80bf32c41c253b4eef44/diff:/var/lib/docker/overlay2/21ae1babc9289fff260c3571871aeb833b37e21656a9cc3eb8df07eb3fe4f79a/diff:/var/lib/docker/overlay2/3ee3331c2ddb88569b411d0ab54a2ef7d7d01ca16c80ced0833580bfbe9bdada/diff:/var/lib/docker/overlay2/d8bc8d60c9bd47ff1095b644ef6d44396d637a148bcebd5ea3b7706fee6b13fe/diff:/var/lib/docker/overlay2/ca1519d93c9c70a99f709b179bab33e31837f4b561c407c362770656a0ad970a/diff:/var/lib/docker/overlay2/8b7ed626d5c01c442f80e5e9bbe87bdaa4e3b209e4d0720010e78ab32631a44a/diff:/var/lib/docker/overlay2/fb54723378f675b6bc72cd8608807007fdf0fc435e1383398764588c2881dcc7/diff:/var/lib/docker/overlay2/20deb1df880f3adcdce0caa0e0b6ce0170bb01f7b7c564aa7c00c10e886a8422/diff:/var/lib/docker/overlay2/3e9c58516a6ca7eb07cbd77ece10826bcffc2c564c20a046413c894f1e457c14/diff:/var/lib/docker/overlay2/9fb4a572727350e63058db77497edb3aa8f3fd157bf3faa4b882f3d6218a2d2c/diff:/var/lib/docker/overlay2/2dad70b776042365cd2686f6925d1728b98e82e82f5ec21fcafaa6ce796653ed/diff:/var/lib/docker/overlay2/d94272e0e249e656b05e1483e035d137254d3bab6b9c568065d1f8783a72cf04/diff:/var/lib/docker/overlay2/c92254120acded698585ef577c9ac3d6f73267981cf36a87ee38ccd694f47b94/diff:/var/lib/docker/overlay2/84b4bbb670c367ba779baabe503b5345e2c3e2beb5a4505c3b235e5db4e89ee6/diff:/var/lib/docker/overlay2/4981a02b24aef7d5c066a42837381dcdd4a299b491d8e55523fca674cd0db0d1/diff:/var/lib/docker/overlay2/c3c34e9c466bb3a144a51042f0930825943916afe285a7f97644c400518f341f/diff:/var/lib/docker/overlay2/44f4921d100d6ba90db390588513726503aec84844325bd99eeb137c6018277f/diff:/var/lib/docker/overlay2/a39a458488b7f863079e4c6b58196e8a4f9082987519a734c45a007cd0d94828/diff:/var/lib/docker/overlay2/f0312047c7b0b02fd66fd826e23406e40cee0ca3ceecfd3ead5dcecbc5026230/diff:/var/lib/docker/overlay2/96c9397a20500e41888794ddb5877995a1734042648a24d59ca0d2ec5021e9de/diff:/var/lib/docker/overlay2/8dea2fad08fc7127380e7b5ee48074c49d9bb8abb4e0e626d1753b47e734e16a/diff:/var/lib/docker/overlay2/b45ce3d74626e250be956220b3bdd19784c7b5f160566cf2abc4e3bebec2e787/diff:/var/lib/docker/overlay2/c53d5b53646725c2e75d104fbdf63f67b1a6d4ec7be410f678c39db7ca88704f/diff:/var/lib/docker/overlay2/c989625713fed7c79c6acf122f86cb4a5d36c5c25f16b6ff042aba0f5c76ef40/diff:/var/lib/docker/overlay2/062c90de70f705242f19d7fd008480be165d852e536336d97bcfe7aaba03bc2c/diff:/var/lib/docker/overlay2/9dd5e3e1997449a8dd0820e30ab1aa5b34db265e9783f9431ebcca7ceaf17510/diff:/var/lib/docker/overlay2/4cb50a0a67380109d348cd3005e5b855fceaf243cf5b0130df8952ed58e6c56c/diff:/var/lib/docker/overlay2/1fce572a3789e30bd91fd684a3bc2cae58743b3718b1d078378158f22156795e/diff:/var/lib/docker/overlay2/2bb28738c8f2de75a3da83169e8b29e28c57bf73908fe80dbca06551ac39d459/diff:/var/lib/docker/overlay2/70c1f9c120af3acda7bbd97c063aeed205a47f16b2818b7a2c4e5cfa2e3321bf/diff:/var/lib/docker/overlay2/84cfc718f71abd3da77845f467dceaeceb62953d1f92e9cb2d966b19d2e9a733/diff:/var/lib/docker/overlay2/8d6f862f75e903072cefca0f974c925dc5946ac5bf7bcb923adecf23cdb3d454/diff:/var/lib/docker/overlay2/778af97f4ec3a1e9ceed247958939b375c3209058ee649ac0231b3ccf59c0e5d/diff:/var/lib/docker/overlay2/c0e0a5b57f41ef9ddf67d67f928bcbbd060abb8aa3ec732c9ee48b3d5ce723a2/diff:/var/lib/docker/overlay2/f4bc2ed173f4985e492d89df0a08aa6017952a9ac37054537d57bb7589c1560e/diff:/var/lib/docker/overlay2/562d496753ef0c1e8279787dfdb7cb4d6e8cfbd0eaf79a1f9dc3fd10916330b5/diff:/var/lib/docker/overlay2/717fb77b4f16514e3bd496845adfe213bd63609053b43f6d800e6757197f0f04/diff:/var/lib/docker/overlay2/4e8d84337665652419a5a40f908d308087c202f55b785114c1e38be84a17eca7/diff:/var/lib/docker/overlay2/5b34f3b4b29c9f9ab991b524096584bbf01d14e9d8d4b7786bda6d28241999e8/diff:/var/lib/docker/overlay2/49e6c28c6a50420d2f8e2a3d3278d425495086d9478a7ece39dd989925949a5d/diff:/var/lib/docker/overlay2/86c1534e0117ca4e106fa3c177c4f1b2d85e37b9d2a5dceeb007afff1721713e/diff:/var/lib/docker/overlay2/c5013a5641f131cadca99884c2ae5b785bfae4a079463490ea0cd215cd884d43/diff:/var/lib/docker/overlay2/f61ccdb261987275521111370c06a14baf102e5047e24281f278eaaee820a410/diff:/var/lib/docker/overlay2/46838e2b0c3f67b4bfda29963d76e2c8babbd54904a4a6f5745e924a73437c2d/diff:/var/lib/docker/overlay2/16180439a4d3ee12ff794b26cbfd692186d7785b4c6f33c8c57416535667c54e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ee8ade62d1db904848aed77d6ea7ff5ce34b40082d77dfa98c46ada1afb80d6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220325020326-262786",
	                "Source": "/var/lib/docker/volumes/no-preload-20220325020326-262786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220325020326-262786",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220325020326-262786",
	                "name.minikube.sigs.k8s.io": "no-preload-20220325020326-262786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6303b0899d592874666e828efb3ee58ea54941cfc0221c7bfbcf1da545710660",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49589"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49588"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49585"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49587"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49586"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6303b0899d59",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220325020326-262786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6f52c20ff4ed",
	                        "no-preload-20220325020326-262786"
	                    ],
	                    "NetworkID": "6fbac9304f70e9e85060797caa05d374912c7ea43808a752012c2c1abc994540",
	                    "EndpointID": "1abb4df8a1d7575cd25c1506b8c27a4565a5cebbb3cb9e69805ea68a845231d8",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
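For reference, the inspect dump above is what the harness captures for its post-mortem. A minimal Go sketch (not the harness's actual helper; the function name and struct are illustrative) of consuming that same output programmatically, decoding only the fields the report relies on, namely State.Status and the published host ports:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // inspectEntry mirrors just the fields of Docker's inspect JSON that the
    // post-mortem above actually looks at; all other fields are ignored.
    type inspectEntry struct {
    	ID    string `json:"Id"`
    	State struct {
    		Status  string `json:"Status"`
    		Running bool   `json:"Running"`
    	} `json:"State"`
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIP   string `json:"HostIp"`
    			HostPort string `json:"HostPort"`
    		} `json:"Ports"`
    	} `json:"NetworkSettings"`
    }

    // inspectContainer (hypothetical name) shells out to docker and decodes
    // the JSON array that `docker inspect` always prints.
    func inspectContainer(name string) (*inspectEntry, error) {
    	out, err := exec.Command("docker", "inspect", name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var entries []inspectEntry
    	if err := json.Unmarshal(out, &entries); err != nil {
    		return nil, err
    	}
    	if len(entries) == 0 {
    		return nil, fmt.Errorf("no container named %q", name)
    	}
    	return &entries[0], nil
    }

    func main() {
    	e, err := inspectContainer("no-preload-20220325020326-262786")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(e.State.Status) // "running" in the dump above
    	for port, binds := range e.NetworkSettings.Ports {
    		for _, b := range binds {
    			// e.g. 22/tcp -> 127.0.0.1:49589
    			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
    		}
    	}
    }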
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220325020326-262786 -n no-preload-20220325020326-262786
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220325020326-262786 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p no-preload-20220325020326-262786 logs -n 25: (1.011986769s)
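The "(dbg) Run" / "(dbg) Done" pair above times each post-mortem command; here "logs -n 25" completed in about a second. A hypothetical sketch of that pattern, assuming only the standard library (the helper name is invented, not taken from helpers_test.go): run the built binary under a deadline, capture stdout and stderr together, and report the elapsed time:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // runWithTimeout (hypothetical helper) runs a binary with a deadline,
    // captures combined output for the post-mortem log, and measures the
    // duration -- the figure printed after "(dbg) Done" above.
    func runWithTimeout(bin string, timeout time.Duration, args ...string) (string, time.Duration, error) {
    	ctx, cancel := context.WithTimeout(context.Background(), timeout)
    	defer cancel()
    	start := time.Now()
    	out, err := exec.CommandContext(ctx, bin, args...).CombinedOutput()
    	return string(out), time.Since(start), err
    }

    func main() {
    	out, took, err := runWithTimeout("out/minikube-linux-amd64", 2*time.Minute,
    		"-p", "no-preload-20220325020326-262786", "logs", "-n", "25")
    	fmt.Printf("done in %s (err=%v)\n%s", took, err, out)
    }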
helpers_test.go:253: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p newest-cni-20220325021454-262786 --memory=2200          | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:14:54 UTC | Fri, 25 Mar 2022 02:15:49 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:49 UTC | Fri, 25 Mar 2022 02:15:50 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:50 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:21 UTC | Fri, 25 Mar 2022 02:16:22 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:23 UTC | Fri, 25 Mar 2022 02:16:24 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:24 UTC | Fri, 25 Mar 2022 02:16:25 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:25 UTC | Fri, 25 Mar 2022 02:16:35 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:35 UTC | Fri, 25 Mar 2022 02:16:35 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20220325021454-262786 --memory=2200          | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:45 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:45 UTC | Fri, 25 Mar 2022 02:16:45 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:45 UTC | Fri, 25 Mar 2022 02:16:46 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:47 UTC | Fri, 25 Mar 2022 02:16:48 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:48 UTC | Fri, 25 Mar 2022 02:16:51 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:51 UTC | Fri, 25 Mar 2022 02:16:52 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786                      | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:19:35 UTC | Fri, 25 Mar 2022 02:19:36 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:55 UTC | Fri, 25 Mar 2022 02:22:56 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:57 UTC | Fri, 25 Mar 2022 02:22:58 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:59 UTC | Fri, 25 Mar 2022 02:22:59 UTC |
	|         | default-k8s-different-port-20220325020956-262786           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:59 UTC | Fri, 25 Mar 2022 02:23:09 UTC |
	|         | default-k8s-different-port-20220325020956-262786           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:23:09 UTC | Fri, 25 Mar 2022 02:23:09 UTC |
	|         | default-k8s-different-port-20220325020956-262786           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:25:38 UTC | Fri, 25 Mar 2022 02:25:39 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786                      | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:28:37 UTC | Fri, 25 Mar 2022 02:28:38 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:28:39 UTC | Fri, 25 Mar 2022 02:28:41 UTC |
	|         | old-k8s-version-20220325015306-262786                      |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:32:11 UTC | Fri, 25 Mar 2022 02:32:11 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 02:23:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0325 02:23:09.537576  530227 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:23:09.537696  530227 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:23:09.537706  530227 out.go:310] Setting ErrFile to fd 2...
	I0325 02:23:09.537710  530227 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:23:09.537815  530227 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:23:09.538048  530227 out.go:304] Setting JSON to false
	I0325 02:23:09.539384  530227 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":18062,"bootTime":1648156928,"procs":378,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:23:09.539464  530227 start.go:125] virtualization: kvm guest
	I0325 02:23:09.542093  530227 out.go:176] * [default-k8s-different-port-20220325020956-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:23:09.543709  530227 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:23:09.542258  530227 notify.go:193] Checking for updates...
	I0325 02:23:09.545591  530227 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:23:09.547307  530227 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:23:09.548939  530227 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:23:09.550462  530227 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:23:09.550916  530227 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:23:09.551395  530227 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:23:09.596032  530227 docker.go:136] docker version: linux-20.10.14
	I0325 02:23:09.596139  530227 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:23:09.694688  530227 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 02:23:09.627733687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:23:09.694822  530227 docker.go:253] overlay module found
	I0325 02:23:09.697284  530227 out.go:176] * Using the docker driver based on existing profile
	I0325 02:23:09.697314  530227 start.go:284] selected driver: docker
	I0325 02:23:09.697321  530227 start.go:801] validating driver "docker" against &{Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:23:09.697441  530227 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:23:09.697477  530227 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:23:09.697500  530227 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:23:09.699359  530227 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:23:09.700002  530227 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:23:09.794728  530227 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 02:23:09.730700135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0325 02:23:09.794990  530227 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:23:09.795026  530227 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:23:09.797186  530227 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:23:09.797321  530227 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 02:23:09.797348  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:09.797358  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:09.797376  530227 start_flags.go:304] config:
	{Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:23:09.799343  530227 out.go:176] * Starting control plane node default-k8s-different-port-20220325020956-262786 in cluster default-k8s-different-port-20220325020956-262786
	I0325 02:23:09.799390  530227 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:23:09.800868  530227 out.go:176] * Pulling base image ...
	I0325 02:23:09.800894  530227 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:23:09.800929  530227 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
	I0325 02:23:09.800950  530227 cache.go:57] Caching tarball of preloaded images
	I0325 02:23:09.800988  530227 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:23:09.801249  530227 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 02:23:09.801271  530227 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on containerd
	I0325 02:23:09.801464  530227 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json ...
	I0325 02:23:09.836753  530227 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:23:09.836785  530227 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:23:09.836808  530227 cache.go:208] Successfully downloaded all kic artifacts
	I0325 02:23:09.836875  530227 start.go:348] acquiring machines lock for default-k8s-different-port-20220325020956-262786: {Name:mk1740da455fcceda9a6f7400776a3a68790d09c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:23:09.836992  530227 start.go:352] acquired machines lock for "default-k8s-different-port-20220325020956-262786" in 82.748µs
	I0325 02:23:09.837017  530227 start.go:94] Skipping create...Using existing machine configuration
	I0325 02:23:09.837034  530227 fix.go:55] fixHost starting: 
	I0325 02:23:09.837307  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:23:09.870534  530227 fix.go:108] recreateIfNeeded on default-k8s-different-port-20220325020956-262786: state=Stopped err=<nil>
	W0325 02:23:09.870565  530227 fix.go:134] unexpected machine state, will restart: <nil>
	I0325 02:23:06.279884  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:08.779908  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:09.872836  530227 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20220325020956-262786" ...
	I0325 02:23:09.872897  530227 cli_runner.go:133] Run: docker start default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.277624  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:23:10.313461  530227 kic.go:420] container "default-k8s-different-port-20220325020956-262786" state is running.
	I0325 02:23:10.314041  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.349467  530227 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json ...
	I0325 02:23:10.349684  530227 machine.go:88] provisioning docker machine ...
	I0325 02:23:10.349734  530227 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220325020956-262786"
	I0325 02:23:10.349784  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.385648  530227 main.go:130] libmachine: Using SSH client type: native
	I0325 02:23:10.385835  530227 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49594 <nil> <nil>}
	I0325 02:23:10.385854  530227 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220325020956-262786 && echo "default-k8s-different-port-20220325020956-262786" | sudo tee /etc/hostname
	I0325 02:23:10.386524  530227 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33004->127.0.0.1:49594: read: connection reset by peer
	I0325 02:23:13.516245  530227 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220325020956-262786
	
	I0325 02:23:13.516321  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.552077  530227 main.go:130] libmachine: Using SSH client type: native
	I0325 02:23:13.552283  530227 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49594 <nil> <nil>}
	I0325 02:23:13.552307  530227 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220325020956-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220325020956-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220325020956-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 02:23:13.671145  530227 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 02:23:13.671181  530227 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:23:13.671209  530227 ubuntu.go:177] setting up certificates
	I0325 02:23:13.671220  530227 provision.go:83] configureAuth start
	I0325 02:23:13.671284  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.707509  530227 provision.go:138] copyHostCerts
	I0325 02:23:13.707567  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:23:13.707583  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:23:13.707654  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:23:13.707752  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:23:13.707763  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:23:13.707785  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:23:13.707835  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:23:13.707843  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:23:13.707863  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:23:13.707902  530227 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220325020956-262786 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220325020956-262786]
	I0325 02:23:13.801684  530227 provision.go:172] copyRemoteCerts
	I0325 02:23:13.801761  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:23:13.801796  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.837900  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:13.926796  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0325 02:23:13.945040  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:23:13.962557  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0325 02:23:13.980609  530227 provision.go:86] duration metric: configureAuth took 309.376559ms
	I0325 02:23:13.980640  530227 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:23:13.980824  530227 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:23:13.980838  530227 machine.go:91] provisioned docker machine in 3.631132536s
	I0325 02:23:13.980846  530227 start.go:302] post-start starting for "default-k8s-different-port-20220325020956-262786" (driver="docker")
	I0325 02:23:13.980853  530227 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:23:13.980892  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:23:13.980932  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.016302  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.102734  530227 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:23:14.105732  530227 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:23:14.105760  530227 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:23:14.105786  530227 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:23:14.105795  530227 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:23:14.105810  530227 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:23:14.105871  530227 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:23:14.105966  530227 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:23:14.106069  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:23:14.113216  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:23:14.131102  530227 start.go:305] post-start completed in 150.235781ms
	I0325 02:23:14.131193  530227 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:23:14.131252  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.166319  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.255555  530227 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:23:14.259268  530227 fix.go:57] fixHost completed within 4.422236664s
	I0325 02:23:14.259296  530227 start.go:81] releasing machines lock for "default-k8s-different-port-20220325020956-262786", held for 4.422290413s
	I0325 02:23:14.259383  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.295568  530227 ssh_runner.go:195] Run: systemctl --version
	I0325 02:23:14.295622  530227 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:23:14.295624  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.295670  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.331630  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.332124  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.440710  530227 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:23:14.453593  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:23:14.463531  530227 docker.go:183] disabling docker service ...
	I0325 02:23:14.463587  530227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:23:14.473649  530227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:23:14.482885  530227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:23:10.781510  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:13.279624  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:15.280218  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:14.552504  530227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:23:14.625188  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 02:23:14.634619  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:23:14.648987  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgI
CBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZ
XMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY
2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
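For readability, the base64 payload above is the containerd config.toml that minikube writes before restarting the runtime. Decoded by hand, it begins as follows (excerpt, truncated; the full payload also sets the CRI plugin's conf_dir to "/etc/cni/net.mk", matching the kubelet's --cni-conf-dir flag later in this log):

	version = 2
	root = "/var/lib/containerd"
	state = "/run/containerd"
	oom_score = 0
	[grpc]
	  address = "/run/containerd/containerd.sock"
	  uid = 0
	  gid = 0
	  max_recv_message_size = 16777216
	  max_send_message_size = 16777216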
	I0325 02:23:14.662584  530227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:23:14.669661  530227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:23:14.676535  530227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:23:14.749687  530227 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:23:14.824010  530227 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:23:14.824124  530227 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:23:14.828479  530227 start.go:462] Will wait 60s for crictl version
	I0325 02:23:14.828546  530227 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:23:14.854134  530227 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:23:14Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
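The fatal "server is not initialized yet" is transient: containerd was restarted a moment earlier and its CRI server had not finished coming up, so minikube schedules a retry. A minimal shell sketch of the same readiness poll (illustrative only, not minikube's retry.go; assumes crictl reads the /etc/crictl.yaml written above):

	# poll until containerd's CRI server answers, then print the version
	until sudo crictl version >/dev/null 2>&1; do
	  sleep 2
	done
	sudo crictl version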
	I0325 02:23:17.779273  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:19.780082  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:21.780204  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:24.279380  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:25.901131  530227 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:23:25.924531  530227 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 02:23:25.924599  530227 ssh_runner.go:195] Run: containerd --version
	I0325 02:23:25.944738  530227 ssh_runner.go:195] Run: containerd --version
	I0325 02:23:25.965406  530227 out.go:176] * Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	I0325 02:23:25.965490  530227 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20220325020956-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:23:25.998365  530227 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0325 02:23:26.001776  530227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
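The host entry is rewritten by filtering the old line out with grep -v, appending the new one, and copying the temp file back with sudo cp rather than mv; a plausible reason is that /etc/hosts is a bind mount inside the Docker container, so it can only be overwritten in place, not replaced by rename. The same idempotent pattern, generalized:

	# rewrite /etc/hosts in place: drop any stale entry, append the new one,
	# then copy (not move) the temp file back so the bind mount survives
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$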
	I0325 02:23:26.013555  530227 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 02:23:26.013655  530227 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:23:26.013730  530227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:23:26.037965  530227 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:23:26.037994  530227 containerd.go:526] Images already preloaded, skipping extraction
	I0325 02:23:26.038048  530227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:23:26.062141  530227 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:23:26.062166  530227 cache_images.go:84] Images are preloaded, skipping loading
	I0325 02:23:26.062213  530227 ssh_runner.go:195] Run: sudo crictl info
	I0325 02:23:26.086309  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:26.086334  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:26.086348  530227 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 02:23:26.086361  530227 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220325020956-262786 NodeName:default-k8s-different-port-20220325020956-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.
49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 02:23:26.086482  530227 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220325020956-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
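This rendered config carries four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is written to /var/tmp/minikube/kubeadm.yaml.new below, then promoted to kubeadm.yaml before the init phases run. A config like this can be sanity-checked without touching the cluster, e.g. (hypothetical invocation, not something this log runs):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run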
	I0325 02:23:26.086574  530227 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220325020956-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0325 02:23:26.086621  530227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
	I0325 02:23:26.093791  530227 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 02:23:26.093861  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 02:23:26.101104  530227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (593 bytes)
	I0325 02:23:26.114154  530227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0325 02:23:26.127481  530227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I0325 02:23:26.139891  530227 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0325 02:23:26.142699  530227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:23:26.151979  530227 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786 for IP: 192.168.49.2
	I0325 02:23:26.152115  530227 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 02:23:26.152173  530227 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 02:23:26.152283  530227 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.key
	I0325 02:23:26.152367  530227 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key.dd3b5fb2
	I0325 02:23:26.152432  530227 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key
	I0325 02:23:26.152572  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 02:23:26.152618  530227 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 02:23:26.152633  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 02:23:26.152719  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 02:23:26.152762  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 02:23:26.152796  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 02:23:26.152856  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:23:26.153663  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 02:23:26.170543  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0325 02:23:26.188516  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 02:23:26.206252  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0325 02:23:26.223851  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 02:23:26.240997  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 02:23:26.258925  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 02:23:26.276782  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 02:23:26.293956  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 02:23:26.311184  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 02:23:26.328788  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 02:23:26.345739  530227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 02:23:26.358217  530227 ssh_runner.go:195] Run: openssl version
	I0325 02:23:26.363310  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 02:23:26.371143  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.374386  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.374446  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.379667  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
	I0325 02:23:26.386880  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 02:23:26.394406  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.397558  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.397619  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.402576  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0325 02:23:26.409580  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 02:23:26.416799  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.419794  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.419843  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.424480  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
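The test -L || ln -fs commands above populate OpenSSL's hashed CA lookup directory: each certificate in /etc/ssl/certs must be reachable through a symlink named after its subject-name hash (here 3ec20f2e.0, b5213941.0, and 51391683.0). The hash for any certificate can be reproduced directly:

	# print the subject-name hash openssl uses to locate a CA certificate
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "${h}.0"   # e.g. b5213941.0, the symlink name used in /etc/ssl/certs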
	I0325 02:23:26.431093  530227 kubeadm.go:391] StartCluster: {Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Schedule
dStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:23:26.431219  530227 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 02:23:26.431267  530227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:23:26.455469  530227 cri.go:87] found id: "f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db"
	I0325 02:23:26.455495  530227 cri.go:87] found id: "246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	I0325 02:23:26.455501  530227 cri.go:87] found id: "dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b"
	I0325 02:23:26.455506  530227 cri.go:87] found id: "21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73"
	I0325 02:23:26.455510  530227 cri.go:87] found id: "bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7"
	I0325 02:23:26.455515  530227 cri.go:87] found id: "6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182"
	I0325 02:23:26.455520  530227 cri.go:87] found id: "c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd"
	I0325 02:23:26.455524  530227 cri.go:87] found id: ""
	I0325 02:23:26.455562  530227 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0325 02:23:26.469264  530227 cri.go:114] JSON = null
	W0325 02:23:26.469319  530227 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 7
	I0325 02:23:26.469383  530227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 02:23:26.476380  530227 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0325 02:23:26.476423  530227 kubeadm.go:601] restartCluster start
	I0325 02:23:26.476467  530227 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0325 02:23:26.483313  530227 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:26.484051  530227 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220325020956-262786" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:23:26.484409  530227 kubeconfig.go:127] "default-k8s-different-port-20220325020956-262786" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0325 02:23:26.485050  530227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:23:26.486481  530227 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0325 02:23:26.493604  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.493676  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.502078  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:26.702482  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.702567  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.712014  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:26.902246  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.902320  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.910978  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.103208  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.103289  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.111964  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.303121  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.303213  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.312214  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.502493  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.502598  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.511468  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.702747  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.702890  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.711697  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.902931  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.903050  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.912319  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.102538  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.102634  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.111710  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.303008  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.303080  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.312078  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.502221  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.502313  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.511095  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.702230  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.702303  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.711103  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.902322  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.902413  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.911515  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.102704  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.102774  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.111434  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.302770  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.302858  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.311706  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.503069  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.503150  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.512690  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.512721  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.512770  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.521635  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.521669  530227 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0325 02:23:29.521677  530227 kubeadm.go:1067] stopping kube-system containers ...
	I0325 02:23:29.521695  530227 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0325 02:23:29.521749  530227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:23:26.279462  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:28.279531  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:30.280060  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:29.546890  530227 cri.go:87] found id: "f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db"
	I0325 02:23:29.546921  530227 cri.go:87] found id: "246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	I0325 02:23:29.546927  530227 cri.go:87] found id: "dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b"
	I0325 02:23:29.546932  530227 cri.go:87] found id: "21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73"
	I0325 02:23:29.546937  530227 cri.go:87] found id: "bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7"
	I0325 02:23:29.546942  530227 cri.go:87] found id: "6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182"
	I0325 02:23:29.546946  530227 cri.go:87] found id: "c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd"
	I0325 02:23:29.546979  530227 cri.go:87] found id: ""
	I0325 02:23:29.546987  530227 cri.go:232] Stopping containers: [f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db 246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739 dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b 21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73 bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7 6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182 c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd]
	I0325 02:23:29.547049  530227 ssh_runner.go:195] Run: which crictl
	I0325 02:23:29.550389  530227 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db 246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739 dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b 21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73 bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7 6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182 c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd
	I0325 02:23:29.575922  530227 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0325 02:23:29.586795  530227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:23:29.594440  530227 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar 25 02:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Mar 25 02:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2131 Mar 25 02:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Mar 25 02:10 /etc/kubernetes/scheduler.conf
	
	I0325 02:23:29.594520  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0325 02:23:29.601472  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0325 02:23:29.608305  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0325 02:23:29.615261  530227 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.615319  530227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0325 02:23:29.622383  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0325 02:23:29.629095  530227 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.629161  530227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0325 02:23:29.636095  530227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:23:29.642934  530227 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0325 02:23:29.642998  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:29.687932  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.297307  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.428688  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.476555  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
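Rather than a full kubeadm init, the restart path replays five individual init phases against the repaired config, regenerating certificates, kubeconfigs, the kubelet bootstrap, and the static pod manifests. Condensed, the sequence above is equivalent to:

	# $phase is intentionally unquoted so "certs all" splits into two arguments
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" \
	    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done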
	I0325 02:23:30.528341  530227 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:23:30.528397  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:31.037340  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:31.536903  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:32.037557  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:32.537100  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:33.037156  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:33.537124  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:34.037604  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:34.536762  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:32.280264  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:34.779413  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:35.037573  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:35.536890  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.037157  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.536733  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.598317  530227 api_server.go:71] duration metric: took 6.069979844s to wait for apiserver process to appear ...
	I0325 02:23:36.598362  530227 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:23:36.598380  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:36.598866  530227 api_server.go:256] stopped: https://192.168.49.2:8444/healthz: Get "https://192.168.49.2:8444/healthz": dial tcp 192.168.49.2:8444: connect: connection refused
	I0325 02:23:37.099575  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:36.779484  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:38.779979  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:40.211650  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0325 02:23:40.211687  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0325 02:23:40.599053  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:40.603812  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:23:40.603846  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:23:41.099269  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:41.104481  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:23:41.104517  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:23:41.599902  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:41.604945  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0325 02:23:41.612918  530227 api_server.go:140] control plane version: v1.23.3
	I0325 02:23:41.612944  530227 api_server.go:130] duration metric: took 5.014575703s to wait for apiserver health ...
	I0325 02:23:41.612957  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:41.612965  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:41.615242  530227 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:23:41.615325  530227 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:23:41.619644  530227 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 02:23:41.619669  530227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:23:41.633910  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0325 02:23:42.356822  530227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:23:42.365307  530227 system_pods.go:59] 9 kube-system pods found
	I0325 02:23:42.365343  530227 system_pods.go:61] "coredns-64897985d-9tgbz" [0d638e01-927d-4431-bf10-393b424f801a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365353  530227 system_pods.go:61] "etcd-default-k8s-different-port-20220325020956-262786" [10e10258-89d5-423b-850f-60ef4b12b83a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0325 02:23:42.365361  530227 system_pods.go:61] "kindnet-kt955" [87a42b24-60b7-415b-abc9-e574262093c0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:23:42.365368  530227 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220325020956-262786" [877f6ccd-dcc7-47ff-8574-9b9ec1b05a25] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0325 02:23:42.365376  530227 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220325020956-262786" [cbd16e08-169e-458a-b9c2-bcaa627475cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0325 02:23:42.365382  530227 system_pods.go:61] "kube-proxy-7cpjt" [6d1657ba-6fcd-4ee8-8293-b6aa0b7e1fb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0325 02:23:42.365387  530227 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220325020956-262786" [7b21b770-272f-4183-a1e4-6cca761e7be8] Running
	I0325 02:23:42.365395  530227 system_pods.go:61] "metrics-server-b955d9d8-h94qn" [f250996f-f9e2-41f2-ba86-6da05d627811] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365401  530227 system_pods.go:61] "storage-provisioner" [1f4e27b1-94bb-49ed-b16e-7237ce00c11a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365409  530227 system_pods.go:74] duration metric: took 8.560724ms to wait for pod list to return data ...
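Every Pending pod in the list above is blocked by the node.kubernetes.io/not-ready taint, which shows up as a PodScheduled=False condition. A hedged client-go sketch of the kind of scan system_pods.go performs (it assumes client-go v0.18+ with the context-taking API; the kubeconfig path is the in-guest one from the log, and a host-side client would use the profile's kubeconfig instead):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path is illustrative; minikube writes a kubeconfig per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s: %s", p.Name, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			// Pods blocked by a taint carry the reason on PodScheduled,
			// e.g. "Unschedulable: 0/1 nodes are available: ...".
			if c.Type == corev1.PodScheduled && c.Status == corev1.ConditionFalse {
				fmt.Printf(" (%s: %s)", c.Reason, c.Message)
			}
		}
		fmt.Println()
	}
}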
	I0325 02:23:42.365419  530227 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:23:42.368395  530227 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:23:42.368426  530227 node_conditions.go:123] node cpu capacity is 8
	I0325 02:23:42.368439  530227 node_conditions.go:105] duration metric: took 3.013418ms to run NodePressure ...
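The NodePressure verification above reads the node's capacity (ephemeral storage 304695084Ki, 8 CPUs) and its pressure conditions. A minimal client-go sketch of an equivalent check, under the same kubeconfig assumption as the previous snippet; this is not node_conditions.go itself, just the same idea:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path illustrative
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity matches the "node ... capacity" figures in the log.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu().String(),
			n.Status.Capacity.StorageEphemeral().String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				// A healthy node reports False for all pressure conditions.
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}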
	I0325 02:23:42.368460  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:42.498603  530227 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0325 02:23:42.503044  530227 kubeadm.go:752] kubelet initialised
	I0325 02:23:42.503087  530227 kubeadm.go:753] duration metric: took 4.396508ms waiting for restarted kubelet to initialise ...
	I0325 02:23:42.503097  530227 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:23:42.508446  530227 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-9tgbz" in "kube-system" namespace to be "Ready" ...
	I0325 02:23:44.514894  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
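The pod_ready.go:102 lines that repeat from here on are a ~2s polling loop re-evaluating one predicate: does the pod have a Ready condition with status True? A compilable sketch of that predicate (the helper name and the main-function demo are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady mirrors the check pod_ready.go keeps retrying: a pod counts
// as "Ready" only when its Ready condition is present and True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A Pending pod like coredns above has no Ready=True condition yet,
	// only PodScheduled=False, so the predicate stays false.
	pending := &corev1.Pod{}
	fmt.Println(isPodReady(pending)) // false
}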
	I0325 02:23:41.279719  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:43.779807  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:46.515114  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:49.014836  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:46.279223  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:48.279265  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:50.280221  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:51.514564  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:54.014786  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:52.780223  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:55.279495  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:56.514871  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:59.014358  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:57.280104  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:59.779435  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:01.015007  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:03.514691  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:01.779945  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:03.780076  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:05.515135  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:08.014925  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:06.279495  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:08.280022  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:10.514744  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:12.514875  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:10.779769  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:13.279988  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:15.014427  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:17.514431  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:15.779111  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:17.779860  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:20.282496  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:20.015198  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:22.514500  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:22.779392  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:24.779583  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:25.014188  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:27.015284  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:29.515114  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:27.280129  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:29.779139  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:32.015294  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:34.514331  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:31.779438  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:34.279292  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:36.514446  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:39.014203  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:36.280233  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:38.779288  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:41.015081  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:43.515133  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:40.779876  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:43.279836  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:45.280111  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:46.014807  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:48.513848  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:47.779037  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:49.779225  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:50.514522  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:53.014610  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:52.279107  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:54.279992  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:55.514800  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:58.014633  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:56.280212  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:58.779953  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:00.514555  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:02.514600  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:04.514849  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:00.780066  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:03.279884  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:05.280246  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:07.014221  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:09.014509  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:07.780397  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:10.279278  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:11.014691  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:13.014798  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:12.779414  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:15.279560  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:15.514210  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:17.514263  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:19.515014  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:17.779664  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:19.779727  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:22.014469  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:24.015322  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:22.279477  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:24.779885  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:26.514766  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:29.014967  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:27.279254  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:29.280083  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:31.514230  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:34.014655  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:31.779951  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:34.279813  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:36.279928  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:38.282274  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:38.282298  519649 node_ready.go:38] duration metric: took 4m0.009544217s waiting for node "no-preload-20220325020326-262786" to be "Ready" ...
	I0325 02:25:38.285018  519649 out.go:176] 
	W0325 02:25:38.285266  519649 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:25:38.285284  519649 out.go:241] * 
	W0325 02:25:38.286304  519649 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
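The GUEST_START failure above is the node-side twin of the pod wait: node_ready.go polled the node's Ready condition for the full 4m window and it never flipped to True, so the 6m node-start budget was exceeded. A sketch of that predicate, with the same caveat that the name and demo are illustrative rather than minikube's exact code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isNodeReady is the condition the wait above timed out on: the kubelet
// must report NodeReady=True before the node is considered usable.
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Until the kubelet reports Ready, the node also carries the
	// node.kubernetes.io/not-ready taint seen in the pod messages above.
	fmt.Println(isNodeReady(&corev1.Node{})) // false
}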
	I0325 02:25:36.513926  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:38.514284  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:40.514364  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:42.514810  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:45.014814  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:47.016025  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:49.514217  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:51.514677  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:54.014149  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:56.014605  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:58.514592  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:01.014803  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:03.015044  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:05.514261  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:07.514811  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:10.014055  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:12.015163  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:14.514268  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:16.514780  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:19.014786  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:21.513928  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:23.514819  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:26.015135  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:28.514150  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:30.514433  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:32.514515  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:35.014162  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:37.014582  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:39.015192  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:41.514478  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:43.514682  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:46.014076  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:48.014564  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:50.015101  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:52.514545  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:55.014470  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:57.514259  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:26:59.514425  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:01.514567  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:03.514798  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:06.014824  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:08.015115  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:10.513891  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:12.514222  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:14.514644  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:17.014325  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:19.514620  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:22.014551  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:24.014603  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:26.015052  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:28.015506  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:30.514915  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:33.014783  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:35.514512  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:38.014733  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:40.514302  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:42.510816  530227 pod_ready.go:81] duration metric: took 4m0.002335219s waiting for pod "coredns-64897985d-9tgbz" in "kube-system" namespace to be "Ready" ...
	E0325 02:27:42.510845  530227 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9tgbz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0325 02:27:42.510866  530227 pod_ready.go:38] duration metric: took 4m0.007755725s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:27:42.510971  530227 kubeadm.go:605] restartCluster took 4m16.034541089s
	W0325 02:27:42.511146  530227 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
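The Unschedulable polls above all report the same cause: the node still carries the node.kubernetes.io/not-ready taint, which the node lifecycle controller keeps on any node whose kubelet reports NotReady (typically because no CNI config is in place yet), and the coredns Deployment does not tolerate that taint, so the pod stays Pending for the whole 4m0s window. A quick way to confirm the taint with the same kubectl binary and kubeconfig this log already uses (the jsonpath query is illustrative, not taken from the log):

	sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'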
	I0325 02:27:42.511207  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0325 02:27:44.339219  530227 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.827981438s)
	I0325 02:27:44.339290  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:27:44.348982  530227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:27:44.356461  530227 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 02:27:44.356520  530227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:27:44.363951  530227 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0325 02:27:44.364022  530227 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0325 02:27:57.283699  530227 out.go:203]   - Generating certificates and keys ...
	I0325 02:27:57.286878  530227 out.go:203]   - Booting up control plane ...
	I0325 02:27:57.289872  530227 out.go:203]   - Configuring RBAC rules ...
	I0325 02:27:57.291696  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:27:57.291719  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:27:57.293919  530227 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:27:57.294011  530227 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:27:57.297810  530227 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 02:27:57.297833  530227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:27:57.312402  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
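With the docker driver and the containerd runtime, minikube selects kindnet as the CNI (cni.go lines above) and applies the 2429-byte manifest with the bundled kubectl. Once the kindnet pod starts, it writes a CNI conflist onto the node, which is the precondition for the kubelet to report the node Ready and for the not-ready taint to be lifted. Assuming the conventional /etc/cni/net.d config directory (profiles can override it), the result can be spot-checked from the host with:

	minikube -p default-k8s-different-port-20220325020956-262786 ssh 'ls -l /etc/cni/net.d/'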
	I0325 02:27:58.034457  530227 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 02:27:58.034521  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=default-k8s-different-port-20220325020956-262786 minikube.k8s.io/updated_at=2022_03_25T02_27_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:58.034522  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:58.101247  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:58.118657  530227 ops.go:34] apiserver oom_adj: -16
	I0325 02:27:58.688158  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:59.188855  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:59.688734  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:00.188158  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:00.688215  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:01.188912  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:01.688969  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:02.188876  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:02.688656  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:03.188835  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:03.688222  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:04.188154  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:04.688514  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:05.188103  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:05.688209  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:06.187993  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:06.688197  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:07.188677  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:07.688113  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:08.187906  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:08.688331  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:09.188315  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:09.688031  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:09.751277  530227 kubeadm.go:1020] duration metric: took 11.716819332s to wait for elevateKubeSystemPrivileges.
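The run of `kubectl get sa default` calls above is minikube polling, roughly twice per second, until the apiserver's controllers have created the default ServiceAccount; that is the readiness signal it uses before considering the minikube-rbac ClusterRoleBinding effective. A shell sketch of the same wait (minikube does this in Go; the loop below is illustrative only):

	until sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done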
	I0325 02:28:09.751307  530227 kubeadm.go:393] StartCluster complete in 4m43.320221544s
	I0325 02:28:09.751334  530227 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:28:09.751483  530227 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:28:09.752678  530227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:28:10.268555  530227 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220325020956-262786" rescaled to 1
	I0325 02:28:10.268633  530227 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:28:10.268674  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 02:28:10.270943  530227 out.go:176] * Verifying Kubernetes components...
	I0325 02:28:10.268968  530227 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0325 02:28:10.269163  530227 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:28:10.271075  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:28:10.271163  530227 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271166  530227 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271183  530227 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271188  530227 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271189  530227 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.271192  530227 addons.go:165] addon metrics-server should already be in state true
	I0325 02:28:10.271207  530227 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.271217  530227 addons.go:165] addon dashboard should already be in state true
	I0325 02:28:10.271232  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.271251  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.271164  530227 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271302  530227 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.271316  530227 addons.go:165] addon storage-provisioner should already be in state true
	I0325 02:28:10.271343  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.271538  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.271708  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.271708  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.271833  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.344040  530227 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0325 02:28:10.344132  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0325 02:28:10.344144  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0325 02:28:10.344219  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.346679  530227 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:28:10.346811  530227 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:28:10.346826  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 02:28:10.346882  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.353938  530227 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0325 02:28:10.355562  530227 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I0325 02:28:10.355640  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0325 02:28:10.355656  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0325 02:28:10.355719  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.383518  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.385223  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.391464  530227 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.391493  530227 addons.go:165] addon default-storageclass should already be in state true
	I0325 02:28:10.391524  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.392049  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.400074  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.437891  530227 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 02:28:10.437915  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 02:28:10.437962  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.471205  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.494523  530227 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220325020956-262786" to be "Ready" ...
	I0325 02:28:10.494562  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0325 02:28:10.609220  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:28:10.609589  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0325 02:28:10.609655  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0325 02:28:10.609633  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0325 02:28:10.609758  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0325 02:28:10.700787  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0325 02:28:10.700823  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0325 02:28:10.701805  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0325 02:28:10.701834  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0325 02:28:10.800343  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0325 02:28:10.800381  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0325 02:28:10.808905  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 02:28:10.810521  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:28:10.810550  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0325 02:28:10.899094  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0325 02:28:10.899126  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0325 02:28:10.905212  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:28:11.003467  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0325 02:28:11.003501  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0325 02:28:11.102902  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0325 02:28:11.102933  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0325 02:28:11.203761  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0325 02:28:11.203793  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0325 02:28:11.210549  530227 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
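The configmap edit at 02:28:10 is what produced this "host record injected" line: the sed expression inserts a hosts block ahead of CoreDNS's forward plugin so pods can resolve host.minikube.internal to the gateway 192.168.49.1. Reconstructed from that sed expression, the injected Corefile fragment is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}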
	I0325 02:28:11.294868  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0325 02:28:11.294905  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0325 02:28:11.407000  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:28:11.407036  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0325 02:28:11.594579  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:28:11.993918  530227 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.08865456s)
	I0325 02:28:11.994023  530227 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:12.406348  530227 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0325 02:28:12.406384  530227 addons.go:417] enableAddons completed in 2.137426118s
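Each addon follows the same pattern visible above: manifests are scp'd from memory to /etc/kubernetes/addons/ and applied in a single kubectl apply with one -f per file; only metrics-server gets an extra verification step (addons.go:386). Assuming the usual deployment name for the metrics-server addon, that verification can be repeated by hand with:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl \
	  -n kube-system rollout status deployment/metrics-server --timeout=60s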
	I0325 02:28:12.501452  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:15.001678  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:17.002236  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:19.501942  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:22.002181  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:24.002264  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:26.501667  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:29.002078  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:31.501585  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:34.001805  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:36.002041  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:38.002339  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:40.501697  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:43.001641  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:45.501691  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:48.001669  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:50.001733  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:52.001838  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:54.002159  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:56.501779  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:28:59.001558  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:01.002011  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:03.501394  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:06.001566  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:08.002174  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:10.501618  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:12.502189  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:15.005102  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:17.501055  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:19.501433  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:22.001489  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:24.001553  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:26.001738  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:28.501727  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:31.001897  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:33.002070  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:35.501917  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:38.001428  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:40.001672  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:42.002147  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:44.501313  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:47.001730  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:49.002193  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:51.501480  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:53.502170  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:56.001749  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:29:58.501848  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:01.001969  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:03.002165  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:05.501492  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:07.501983  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:10.001162  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:12.001919  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:14.001948  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:16.501197  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:18.502117  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:20.502225  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:23.002141  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:25.502083  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:28.001876  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:30.002027  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:32.503056  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:35.001423  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:37.501534  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:40.002190  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:42.502274  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:45.001432  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:47.001999  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:49.501359  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:52.001784  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:54.501845  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:56.502160  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:30:59.001872  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:01.501682  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:04.002218  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:06.002445  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:08.501995  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:10.502365  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:13.001863  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:15.002027  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:17.003337  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:19.501499  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:21.501688  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:24.001679  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:26.502090  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:29.001645  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:31.501716  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:33.501846  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:36.001006  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:38.002402  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:40.501392  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:42.502189  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:45.002082  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:47.500971  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:49.502314  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:52.001990  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:54.501597  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:57.001789  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:59.002549  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:01.003204  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:03.501518  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:05.502059  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:07.502226  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:10.001697  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:10.504266  530227 node_ready.go:38] duration metric: took 4m0.009695209s waiting for node "default-k8s-different-port-20220325020956-262786" to be "Ready" ...
	I0325 02:32:10.507459  530227 out.go:176] 
	W0325 02:32:10.507629  530227 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:32:10.507649  530227 out.go:241] * 
	W0325 02:32:10.508449  530227 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	8e825f59ea51d       6de166512aa22       58 seconds ago      Running             kindnet-cni               4                   884eb4334953e
	d1081550267e3       6de166512aa22       4 minutes ago       Exited              kindnet-cni               3                   884eb4334953e
	d6f58f8b25dd7       abbcf459c7739       13 minutes ago      Running             kube-proxy                0                   ca7a34b0094a0
	53b9a35e53a1f       25f8c7f3da61c       13 minutes ago      Running             etcd                      2                   05bf41c7a933c
	86d75965d4f3a       4a82fd4414312       13 minutes ago      Running             kube-scheduler            2                   803cf0205f95a
	b4eeb80b5bb17       9f243260866d4       13 minutes ago      Running             kube-controller-manager   2                   59e731f150549
	b82e990d40b98       ce3b8500a91ff       13 minutes ago      Running             kube-apiserver            2                   0dea622433793
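The kindnet-cni rows are the notable ones here: attempt 4 is only 58 seconds old while attempt 3 has already Exited, i.e. the CNI container has been crash-looping on this node, which matches a node that never settles into Ready. Assuming crictl inside the node (minikube's containerd node images ship it) and reusing the exited container ID from the table above, the failing attempt's output could be pulled with:

	minikube -p no-preload-20220325020326-262786 ssh \
	  'sudo crictl ps -a | grep kindnet; sudo crictl logs d1081550267e3'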
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-03-25 02:16:36 UTC, end at Fri 2022-03-25 02:34:41 UTC. --
	Mar 25 02:27:00 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:27:00.540499558Z" level=info msg="RemoveContainer for \"ab3bef1048aec67b49b09949f76284299ab614b53e9aac670758dd1698bbffd5\" returns successfully"
	Mar 25 02:27:12 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:27:12.924528710Z" level=info msg="CreateContainer within sandbox \"884eb4334953e45ad0fdeb92e81f30a11f606e7a1eed682c7976766c11f4b814\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Mar 25 02:27:12 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:27:12.937049979Z" level=info msg="CreateContainer within sandbox \"884eb4334953e45ad0fdeb92e81f30a11f606e7a1eed682c7976766c11f4b814\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"754c5235a9861e6a1cf9015cbb5738a6a2ba7f0c31f7c617bc4573d77dfa6bc4\""
	Mar 25 02:27:12 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:27:12.937467901Z" level=info msg="StartContainer for \"754c5235a9861e6a1cf9015cbb5738a6a2ba7f0c31f7c617bc4573d77dfa6bc4\""
	Mar 25 02:27:13 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:27:13.088388046Z" level=info msg="StartContainer for \"754c5235a9861e6a1cf9015cbb5738a6a2ba7f0c31f7c617bc4573d77dfa6bc4\" returns successfully"
	Mar 25 02:29:53 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:29:53.330652253Z" level=info msg="shim disconnected" id=754c5235a9861e6a1cf9015cbb5738a6a2ba7f0c31f7c617bc4573d77dfa6bc4
	Mar 25 02:29:53 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:29:53.330720264Z" level=warning msg="cleaning up after shim disconnected" id=754c5235a9861e6a1cf9015cbb5738a6a2ba7f0c31f7c617bc4573d77dfa6bc4 namespace=k8s.io
	Mar 25 02:29:53 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:29:53.330731984Z" level=info msg="cleaning up dead shim"
	Mar 25 02:29:53 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:29:53.341731252Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:29:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4274\n"
	Mar 25 02:29:53 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:29:53.829160827Z" level=info msg="RemoveContainer for \"e236231390656f33bbcd559aed98e81d9cf6191805f4007c8ab3fc429df08e37\""
	Mar 25 02:29:53 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:29:53.834349963Z" level=info msg="RemoveContainer for \"e236231390656f33bbcd559aed98e81d9cf6191805f4007c8ab3fc429df08e37\" returns successfully"
	Mar 25 02:30:19 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:30:19.925766128Z" level=info msg="CreateContainer within sandbox \"884eb4334953e45ad0fdeb92e81f30a11f606e7a1eed682c7976766c11f4b814\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Mar 25 02:30:19 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:30:19.940141350Z" level=info msg="CreateContainer within sandbox \"884eb4334953e45ad0fdeb92e81f30a11f606e7a1eed682c7976766c11f4b814\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"d1081550267e3de004a5508343510f095a0adfa1f256f99a1013aec3d6b0757c\""
	Mar 25 02:30:19 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:30:19.940689395Z" level=info msg="StartContainer for \"d1081550267e3de004a5508343510f095a0adfa1f256f99a1013aec3d6b0757c\""
	Mar 25 02:30:20 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:30:20.188805064Z" level=info msg="StartContainer for \"d1081550267e3de004a5508343510f095a0adfa1f256f99a1013aec3d6b0757c\" returns successfully"
	Mar 25 02:33:00 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:33:00.429105572Z" level=info msg="shim disconnected" id=d1081550267e3de004a5508343510f095a0adfa1f256f99a1013aec3d6b0757c
	Mar 25 02:33:00 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:33:00.429185205Z" level=warning msg="cleaning up after shim disconnected" id=d1081550267e3de004a5508343510f095a0adfa1f256f99a1013aec3d6b0757c namespace=k8s.io
	Mar 25 02:33:00 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:33:00.429208407Z" level=info msg="cleaning up dead shim"
	Mar 25 02:33:00 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:33:00.440109815Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:33:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4375\n"
	Mar 25 02:33:01 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:33:01.139825230Z" level=info msg="RemoveContainer for \"754c5235a9861e6a1cf9015cbb5738a6a2ba7f0c31f7c617bc4573d77dfa6bc4\""
	Mar 25 02:33:01 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:33:01.144648715Z" level=info msg="RemoveContainer for \"754c5235a9861e6a1cf9015cbb5738a6a2ba7f0c31f7c617bc4573d77dfa6bc4\" returns successfully"
	Mar 25 02:33:42 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:33:42.924580414Z" level=info msg="CreateContainer within sandbox \"884eb4334953e45ad0fdeb92e81f30a11f606e7a1eed682c7976766c11f4b814\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Mar 25 02:33:42 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:33:42.942569608Z" level=info msg="CreateContainer within sandbox \"884eb4334953e45ad0fdeb92e81f30a11f606e7a1eed682c7976766c11f4b814\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"8e825f59ea51dd5aa983a8085154effe46b9f55deeed06037531dac02a6707ea\""
	Mar 25 02:33:42 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:33:42.943114533Z" level=info msg="StartContainer for \"8e825f59ea51dd5aa983a8085154effe46b9f55deeed06037531dac02a6707ea\""
	Mar 25 02:33:43 no-preload-20220325020326-262786 containerd[345]: time="2022-03-25T02:33:43.188663878Z" level=info msg="StartContainer for \"8e825f59ea51dd5aa983a8085154effe46b9f55deeed06037531dac02a6707ea\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220325020326-262786
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220325020326-262786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95
	                    minikube.k8s.io/name=no-preload-20220325020326-262786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_25T02_21_24_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Mar 2022 02:21:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220325020326-262786
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 25 Mar 2022 02:34:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Mar 2022 02:31:51 +0000   Fri, 25 Mar 2022 02:21:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Mar 2022 02:31:51 +0000   Fri, 25 Mar 2022 02:21:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Mar 2022 02:31:51 +0000   Fri, 25 Mar 2022 02:21:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 25 Mar 2022 02:31:51 +0000   Fri, 25 Mar 2022 02:21:18 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20220325020326-262786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                38254055-e8ea-4285-a000-185429061264
	  Boot ID:                    63fce5d9-a30b-498a-bfed-7dd46d23a363
	  Kernel Version:             5.13.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.4-rc.0
	  Kube-Proxy Version:         v1.23.4-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220325020326-262786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-pqqft                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-no-preload-20220325020326-262786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-20220325020326-262786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-l5l9q                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-20220325020326-262786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 13m   kube-proxy  
	  Normal  Starting                 13m   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet     Node no-preload-20220325020326-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet     Node no-preload-20220325020326-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet     Node no-preload-20220325020326-262786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m   kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +1.011896] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.023877] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +0.953086] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethf4b51852
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 70 36 f8 5f b0 08 06
	[  +0.031950] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth6535462d
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a c6 0e 0e 23 49 08 06
	[  +0.644934] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth6535462d
	[  +0.401878] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth5b52bbbf
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 37 c2 ed 50 67 08 06
	[  +0.935995] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.035860] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.019942] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[Mar25 02:14] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth3401b1e0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 76 e9 ba cf fb f8 08 06
	[  +0.179199] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vetha9eb2fdf
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 32 f8 c1 5c 31 f0 08 06
	[  +0.564272] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethc1de7e82
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e d1 71 83 67 99 08 06
	[  +0.295714] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth71c4bd69
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe 35 ee 14 12 82 08 06
	
	* 
	* ==> etcd [53b9a35e53a1f11832bf97ad9473cce2d2bb0222ec7087b5d2bb35f4d7e6ed23] <==
	* {"level":"info","ts":"2022-03-25T02:21:17.717Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-03-25T02:21:17.717Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-03-25T02:21:17.717Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-03-25T02:21:17.717Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-03-25T02:21:17.717Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20220325020326-262786 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:21:18.707Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:21:18.708Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-25T02:21:18.708Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-25T02:21:18.708Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:21:18.708Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:21:18.708Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:21:18.709Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-03-25T02:21:18.709Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-03-25T02:31:18.723Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":644}
	{"level":"info","ts":"2022-03-25T02:31:18.724Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":644,"took":"664.653µs"}
	
	* 
	* ==> kernel <==
	*  02:34:41 up  5:12,  0 users,  load average: 0.08, 0.17, 0.57
	Linux no-preload-20220325020326-262786 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [b82e990d40b983d4de17f25adbeb2fe89b571c39a24ee87a05b194ed981e9d4b] <==
	* I0325 02:24:40.403227       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0325 02:26:21.533272       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:26:21.533359       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:26:21.533394       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0325 02:27:21.533727       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:27:21.533782       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:27:21.533791       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0325 02:29:21.534216       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:29:21.534303       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:29:21.534312       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0325 02:31:21.538302       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:31:21.538396       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:31:21.538404       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0325 02:32:21.539049       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:32:21.539141       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:32:21.539150       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0325 02:34:21.539578       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:34:21.539668       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:34:21.539677       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [b4eeb80b5bb17a15e68bd5bb2307122772dcbffee65dfef90ef3c519989b81c6] <==
	* W0325 02:28:37.602279       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:29:07.228270       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:29:07.616060       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:29:37.240753       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:29:37.630296       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:30:07.254592       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:30:07.645913       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:30:37.267920       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:30:37.661370       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:31:07.279115       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:31:07.675541       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:31:37.292698       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:31:37.689487       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:32:07.305757       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:32:07.705538       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:32:37.317552       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:32:37.720680       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:33:07.328339       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:33:07.735192       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:33:37.339678       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:33:37.750022       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:34:07.350714       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:34:07.765556       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:34:37.359918       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:34:37.779646       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [d6f58f8b25dd7a12dcd4bbe7b98e14edf89e41d8ed965c5f4afe0581e5dd0409] <==
	* I0325 02:21:38.177110       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0325 02:21:38.177160       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0325 02:21:38.177191       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0325 02:21:38.199792       1 server_others.go:206] "Using iptables Proxier"
	I0325 02:21:38.199842       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0325 02:21:38.199853       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0325 02:21:38.199881       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0325 02:21:38.200324       1 server.go:656] "Version info" version="v1.23.4-rc.0"
	I0325 02:21:38.200994       1 config.go:317] "Starting service config controller"
	I0325 02:21:38.201026       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0325 02:21:38.201438       1 config.go:226] "Starting endpoint slice config controller"
	I0325 02:21:38.201453       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0325 02:21:38.301474       1 shared_informer.go:247] Caches are synced for service config 
	I0325 02:21:38.301561       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [86d75965d4f3ab5cae35b8697279da65f0af1b8e291f65a2f57058b7c1595521] <==
	* E0325 02:21:20.686752       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0325 02:21:20.686680       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0325 02:21:20.686797       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0325 02:21:20.686791       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0325 02:21:20.686834       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:21:20.686848       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 02:21:20.686850       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0325 02:21:20.687001       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0325 02:21:20.687026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0325 02:21:20.687025       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 02:21:20.687064       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0325 02:21:21.494111       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0325 02:21:21.494161       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0325 02:21:21.514545       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 02:21:21.514615       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0325 02:21:21.516409       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 02:21:21.516444       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0325 02:21:21.518452       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0325 02:21:21.518502       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0325 02:21:21.523480       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0325 02:21:21.523518       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0325 02:21:21.526456       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 02:21:21.526482       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 02:21:23.057524       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0325 02:21:23.605054       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-03-25 02:16:36 UTC, end at Fri 2022-03-25 02:34:42 UTC. --
	Mar 25 02:33:04 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:33:04.159929    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:33:09 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:33:09.160693    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:33:13 no-preload-20220325020326-262786 kubelet[2906]: I0325 02:33:13.921480    2906 scope.go:110] "RemoveContainer" containerID="d1081550267e3de004a5508343510f095a0adfa1f256f99a1013aec3d6b0757c"
	Mar 25 02:33:13 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:33:13.921748    2906 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-pqqft_kube-system(4bc6dee7-b939-402e-bc62-74ce9f083e11)\"" pod="kube-system/kindnet-pqqft" podUID=4bc6dee7-b939-402e-bc62-74ce9f083e11
	Mar 25 02:33:14 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:33:14.161359    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:33:19 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:33:19.162870    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:33:24 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:33:24.163920    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:33:27 no-preload-20220325020326-262786 kubelet[2906]: I0325 02:33:27.921800    2906 scope.go:110] "RemoveContainer" containerID="d1081550267e3de004a5508343510f095a0adfa1f256f99a1013aec3d6b0757c"
	Mar 25 02:33:27 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:33:27.922089    2906 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-pqqft_kube-system(4bc6dee7-b939-402e-bc62-74ce9f083e11)\"" pod="kube-system/kindnet-pqqft" podUID=4bc6dee7-b939-402e-bc62-74ce9f083e11
	Mar 25 02:33:29 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:33:29.165246    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:33:34 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:33:34.165974    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:33:39 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:33:39.167159    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:33:42 no-preload-20220325020326-262786 kubelet[2906]: I0325 02:33:42.922285    2906 scope.go:110] "RemoveContainer" containerID="d1081550267e3de004a5508343510f095a0adfa1f256f99a1013aec3d6b0757c"
	Mar 25 02:33:44 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:33:44.168635    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:33:49 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:33:49.169910    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:33:54 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:33:54.171456    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:33:59 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:33:59.172648    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:34:04 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:34:04.173991    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:34:09 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:34:09.174996    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:34:14 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:34:14.176621    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:34:19 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:34:19.178042    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:34:24 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:34:24.179058    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:34:29 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:34:29.180004    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:34:34 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:34:34.181235    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:34:39 no-preload-20220325020326-262786 kubelet[2906]: E0325 02:34:39.182241    2906 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
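The captured logs above are self-consistent about the root cause: containerd shows the kindnet-cni container being recreated over and over (Attempt 2, 3 and 4 in the same sandbox), the kubelet repeatedly reports "Container runtime network not ready ... cni plugin not initialized", and "describe nodes" shows the node stuck at Ready=False with the node.kubernetes.io/not-ready:NoSchedule taint. In short: the CNI pod crash-loops, so the node never becomes Ready and nothing else can schedule. A minimal way to confirm that state by hand, assuming the profile from this run is still up (a sketch, not part of the test output):

	$ kubectl --context no-preload-20220325020326-262786 get nodes -o wide
	$ kubectl --context no-preload-20220325020326-262786 get node no-preload-20220325020326-262786 \
	    -o jsonpath='{range .status.conditions[*]}{.type}={.status} ({.reason}){"\n"}{end}'
	$ minikube -p no-preload-20220325020326-262786 ssh "sudo crictl ps -a | grep kindnet"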
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220325020326-262786 -n no-preload-20220325020326-262786
helpers_test.go:262: (dbg) Run:  kubectl --context no-preload-20220325020326-262786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-64897985d-kj92r metrics-server-b955d9d8-9fbns storage-provisioner dashboard-metrics-scraper-56974995fc-wpjtl kubernetes-dashboard-ccd587f44-g7pm5
helpers_test.go:273: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context no-preload-20220325020326-262786 describe pod coredns-64897985d-kj92r metrics-server-b955d9d8-9fbns storage-provisioner dashboard-metrics-scraper-56974995fc-wpjtl kubernetes-dashboard-ccd587f44-g7pm5
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20220325020326-262786 describe pod coredns-64897985d-kj92r metrics-server-b955d9d8-9fbns storage-provisioner dashboard-metrics-scraper-56974995fc-wpjtl kubernetes-dashboard-ccd587f44-g7pm5: exit status 1 (69.376126ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-kj92r" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-9fbns" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-wpjtl" not found
	Error from server (NotFound): pods "kubernetes-dashboard-ccd587f44-g7pm5" not found

** /stderr **
helpers_test.go:278: kubectl --context no-preload-20220325020326-262786 describe pod coredns-64897985d-kj92r metrics-server-b955d9d8-9fbns storage-provisioner dashboard-metrics-scraper-56974995fc-wpjtl kubernetes-dashboard-ccd587f44-g7pm5: exit status 1
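(The NotFound errors above are likely an artifact of the post-mortem helper rather than of the cluster: the pods were listed with -A across all namespaces, but the follow-up "describe pod" runs without a namespace flag and therefore looks only in the default namespace. Re-running with the right namespace would be expected to find the pod; the command below is a sketch, not part of the test output.

	$ kubectl --context no-preload-20220325020326-262786 -n kube-system describe pod coredns-64897985d-kj92r
)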
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.58s)
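For reference, the condition this test waits on (nine minutes for a pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, the same wait the sibling test below spells out at start_stop_delete_test.go:259) can be approximated by hand with kubectl; a sketch, assuming the profile is still running:

	$ kubectl --context no-preload-20220325020326-262786 -n kubernetes-dashboard \
	    get pods -l k8s-app=kubernetes-dashboard
	$ kubectl --context no-preload-20220325020326-262786 -n kubernetes-dashboard \
	    wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m

Given the NotReady node shown above, this wait would be expected to time out exactly as the test did.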

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (542.49s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-ghrlb" [17f3e78f-d61a-4a71-9aa5-a35708fe766d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
E0325 02:32:23.465324  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 02:32:37.498611  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220325014920-262786/client.crt: no such file or directory
E0325 02:32:43.341347  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
E0325 02:32:50.401891  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.crt: no such file or directory
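The Pending reason logged above is the scheduler refusing placement because of the node.kubernetes.io/not-ready:NoSchedule taint, which the dashboard pod carries no toleration for; the taint in turn follows from the node's Ready=False condition. One way to inspect the taint directly (a sketch; <profile> and <node> are placeholders, since this test's profile name is not shown in this excerpt):

	$ kubectl --context <profile> get node <node> -o jsonpath='{.spec.taints}'
	$ kubectl --context <profile> describe node <node> | grep -A1 Taints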

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
	[previous warning repeated 13 more times]
E0325 02:40:10.840180  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
	[previous warning repeated 48 more times]
E0325 02:41:00.415293  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
[previous warning repeated 3 more times]
E0325 02:41:03.978801  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220325020326-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E0325 02:41:05.105994  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
[previous warning repeated 6 more times]
E0325 02:41:12.032730  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
start_stop_delete_test.go:259: ***** TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:259: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220325020956-262786 -n default-k8s-different-port-20220325020956-262786
start_stop_delete_test.go:259: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2022-03-25 02:41:13.068406826 +0000 UTC m=+4997.394535800
start_stop_delete_test.go:259: (dbg) Run:  kubectl --context default-k8s-different-port-20220325020956-262786 describe po kubernetes-dashboard-ccd587f44-ghrlb -n kubernetes-dashboard
start_stop_delete_test.go:259: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220325020956-262786 describe po kubernetes-dashboard-ccd587f44-ghrlb -n kubernetes-dashboard: context deadline exceeded (2.093µs)
start_stop_delete_test.go:259: kubectl --context default-k8s-different-port-20220325020956-262786 describe po kubernetes-dashboard-ccd587f44-ghrlb -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:259: (dbg) Run:  kubectl --context default-k8s-different-port-20220325020956-262786 logs kubernetes-dashboard-ccd587f44-ghrlb -n kubernetes-dashboard
start_stop_delete_test.go:259: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220325020956-262786 logs kubernetes-dashboard-ccd587f44-ghrlb -n kubernetes-dashboard: context deadline exceeded (190ns)
start_stop_delete_test.go:259: kubectl --context default-k8s-different-port-20220325020956-262786 logs kubernetes-dashboard-ccd587f44-ghrlb -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
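The harness gave up after nine minutes of polling the pod list. The same readiness gate can be reproduced by hand against this profile with `kubectl wait` (a sketch using the context name from this run; not part of the test harness):

    kubectl --context default-k8s-different-port-20220325020956-262786 \
      -n kubernetes-dashboard wait pod \
      -l k8s-app=kubernetes-dashboard \
      --for=condition=Ready --timeout=540s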
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20220325020956-262786
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20220325020956-262786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4",
	        "Created": "2022-03-25T02:10:07.830065737Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 530511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-25T02:23:10.269638118Z",
	            "FinishedAt": "2022-03-25T02:23:08.930200628Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/hosts",
	        "LogPath": "/var/lib/docker/containers/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4/0e271f66fa8de6fefccaea4b09cce21ad3f284e463eabb0cf972a5ef1c096ec4-json.log",
	        "Name": "/default-k8s-different-port-20220325020956-262786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220325020956-262786:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220325020956-262786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873-init/diff:/var/lib/docker/overlay2/be54eb94fcdf63a43430c63a2deca34069b6322a2c5c80bf32c41c253b4eef44/diff:/var/lib/docker/overlay2/21ae1babc9289fff260c3571871aeb833b37e21656a9cc3eb8df07eb3fe4f79a/diff:/var/lib/docker/overlay2/3ee3331c2ddb88569b411d0ab54a2ef7d7d01ca16c80ced0833580bfbe9bdada/diff:/var/lib/docker/overlay2/d8bc8d60c9bd47ff1095b644ef6d44396d637a148bcebd5ea3b7706fee6b13fe/diff:/var/lib/docker/overlay2/ca1519d93c9c70a99f709b179bab33e31837f4b561c407c362770656a0ad970a/diff:/var/lib/docker/overlay2/8b7ed626d5c01c442f80e5e9bbe87bdaa4e3b209e4d0720010e78ab32631a44a/diff:/var/lib/docker/overlay2/fb54723378f675b6bc72cd8608807007fdf0fc435e1383398764588c2881dcc7/diff:/var/lib/docker/overlay2/20deb1df880f3adcdce0caa0e0b6ce0170bb01f7b7c564aa7c00c10e886a8422/diff:/var/lib/docker/overlay2/3e9c58516a6ca7eb07cbd77ece10826bcffc2c564c20a046413c894f1e457c14/diff:/var/lib/docker/overlay2/9fb4a5
72727350e63058db77497edb3aa8f3fd157bf3faa4b882f3d6218a2d2c/diff:/var/lib/docker/overlay2/2dad70b776042365cd2686f6925d1728b98e82e82f5ec21fcafaa6ce796653ed/diff:/var/lib/docker/overlay2/d94272e0e249e656b05e1483e035d137254d3bab6b9c568065d1f8783a72cf04/diff:/var/lib/docker/overlay2/c92254120acded698585ef577c9ac3d6f73267981cf36a87ee38ccd694f47b94/diff:/var/lib/docker/overlay2/84b4bbb670c367ba779baabe503b5345e2c3e2beb5a4505c3b235e5db4e89ee6/diff:/var/lib/docker/overlay2/4981a02b24aef7d5c066a42837381dcdd4a299b491d8e55523fca674cd0db0d1/diff:/var/lib/docker/overlay2/c3c34e9c466bb3a144a51042f0930825943916afe285a7f97644c400518f341f/diff:/var/lib/docker/overlay2/44f4921d100d6ba90db390588513726503aec84844325bd99eeb137c6018277f/diff:/var/lib/docker/overlay2/a39a458488b7f863079e4c6b58196e8a4f9082987519a734c45a007cd0d94828/diff:/var/lib/docker/overlay2/f0312047c7b0b02fd66fd826e23406e40cee0ca3ceecfd3ead5dcecbc5026230/diff:/var/lib/docker/overlay2/96c9397a20500e41888794ddb5877995a1734042648a24d59ca0d2ec5021e9de/diff:/var/lib/d
ocker/overlay2/8dea2fad08fc7127380e7b5ee48074c49d9bb8abb4e0e626d1753b47e734e16a/diff:/var/lib/docker/overlay2/b45ce3d74626e250be956220b3bdd19784c7b5f160566cf2abc4e3bebec2e787/diff:/var/lib/docker/overlay2/c53d5b53646725c2e75d104fbdf63f67b1a6d4ec7be410f678c39db7ca88704f/diff:/var/lib/docker/overlay2/c989625713fed7c79c6acf122f86cb4a5d36c5c25f16b6ff042aba0f5c76ef40/diff:/var/lib/docker/overlay2/062c90de70f705242f19d7fd008480be165d852e536336d97bcfe7aaba03bc2c/diff:/var/lib/docker/overlay2/9dd5e3e1997449a8dd0820e30ab1aa5b34db265e9783f9431ebcca7ceaf17510/diff:/var/lib/docker/overlay2/4cb50a0a67380109d348cd3005e5b855fceaf243cf5b0130df8952ed58e6c56c/diff:/var/lib/docker/overlay2/1fce572a3789e30bd91fd684a3bc2cae58743b3718b1d078378158f22156795e/diff:/var/lib/docker/overlay2/2bb28738c8f2de75a3da83169e8b29e28c57bf73908fe80dbca06551ac39d459/diff:/var/lib/docker/overlay2/70c1f9c120af3acda7bbd97c063aeed205a47f16b2818b7a2c4e5cfa2e3321bf/diff:/var/lib/docker/overlay2/84cfc718f71abd3da77845f467dceaeceb62953d1f92e9cb2d966b19d2e
9a733/diff:/var/lib/docker/overlay2/8d6f862f75e903072cefca0f974c925dc5946ac5bf7bcb923adecf23cdb3d454/diff:/var/lib/docker/overlay2/778af97f4ec3a1e9ceed247958939b375c3209058ee649ac0231b3ccf59c0e5d/diff:/var/lib/docker/overlay2/c0e0a5b57f41ef9ddf67d67f928bcbbd060abb8aa3ec732c9ee48b3d5ce723a2/diff:/var/lib/docker/overlay2/f4bc2ed173f4985e492d89df0a08aa6017952a9ac37054537d57bb7589c1560e/diff:/var/lib/docker/overlay2/562d496753ef0c1e8279787dfdb7cb4d6e8cfbd0eaf79a1f9dc3fd10916330b5/diff:/var/lib/docker/overlay2/717fb77b4f16514e3bd496845adfe213bd63609053b43f6d800e6757197f0f04/diff:/var/lib/docker/overlay2/4e8d84337665652419a5a40f908d308087c202f55b785114c1e38be84a17eca7/diff:/var/lib/docker/overlay2/5b34f3b4b29c9f9ab991b524096584bbf01d14e9d8d4b7786bda6d28241999e8/diff:/var/lib/docker/overlay2/49e6c28c6a50420d2f8e2a3d3278d425495086d9478a7ece39dd989925949a5d/diff:/var/lib/docker/overlay2/86c1534e0117ca4e106fa3c177c4f1b2d85e37b9d2a5dceeb007afff1721713e/diff:/var/lib/docker/overlay2/c5013a5641f131cadca99884c2ae5b785bfae4
a079463490ea0cd215cd884d43/diff:/var/lib/docker/overlay2/f61ccdb261987275521111370c06a14baf102e5047e24281f278eaaee820a410/diff:/var/lib/docker/overlay2/46838e2b0c3f67b4bfda29963d76e2c8babbd54904a4a6f5745e924a73437c2d/diff:/var/lib/docker/overlay2/16180439a4d3ee12ff794b26cbfd692186d7785b4c6f33c8c57416535667c54e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9e4a6198d4efd2bc9176f38a6f574f92650f459f2244c3a9305b1ea705aca873/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220325020956-262786",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220325020956-262786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220325020956-262786",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220325020956-262786",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220325020956-262786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e63ced8335d7e5f521c6a6ba8d6908625d99a772df361d90fbcab337a78b772",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49594"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49593"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49590"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49592"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49591"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6e63ced8335d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220325020956-262786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0e271f66fa8d",
	                        "default-k8s-different-port-20220325020956-262786"
	                    ],
	                    "NetworkID": "c5c0224540019d877be5e36bfc556dc0a2d83980f6e5b563be26e38eaad27a38",
	                    "EndpointID": "b15913a0fa2c6356af7d5afde8a1a2d1e35583bc3ab4949729b25ba92bed5481",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
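The container itself is healthy ("Status": "running", IP 192.168.49.2), so the failure sits above the Docker layer. When reading dumps like the one above, the few fields the post-mortem actually relies on can be pulled directly with a Go-template filter instead of the full payload (a sketch, not part of the harness):

    docker inspect default-k8s-different-port-20220325020956-262786 \
      --format '{{.State.Status}} {{.RestartCount}} {{index .NetworkSettings.Ports "22/tcp"}}'
    docker port default-k8s-different-port-20220325020956-262786 22/tcp   # mapped SSH port (49594 here)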
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220325020956-262786 -n default-k8s-different-port-20220325020956-262786
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220325020956-262786 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-different-port-20220325020956-262786 logs -n 25: (1.036601302s)
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:15:50 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:10 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:21 UTC | Fri, 25 Mar 2022 02:16:22 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:23 UTC | Fri, 25 Mar 2022 02:16:24 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:24 UTC | Fri, 25 Mar 2022 02:16:25 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:25 UTC | Fri, 25 Mar 2022 02:16:35 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:35 UTC | Fri, 25 Mar 2022 02:16:35 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20220325021454-262786 --memory=2200          | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:10 UTC | Fri, 25 Mar 2022 02:16:45 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:45 UTC | Fri, 25 Mar 2022 02:16:45 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:45 UTC | Fri, 25 Mar 2022 02:16:46 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:47 UTC | Fri, 25 Mar 2022 02:16:48 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:48 UTC | Fri, 25 Mar 2022 02:16:51 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220325021454-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:16:51 UTC | Fri, 25 Mar 2022 02:16:52 UTC |
	|         | newest-cni-20220325021454-262786                           |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786                      | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:19:35 UTC | Fri, 25 Mar 2022 02:19:36 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:55 UTC | Fri, 25 Mar 2022 02:22:56 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:57 UTC | Fri, 25 Mar 2022 02:22:58 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:59 UTC | Fri, 25 Mar 2022 02:22:59 UTC |
	|         | default-k8s-different-port-20220325020956-262786           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:22:59 UTC | Fri, 25 Mar 2022 02:23:09 UTC |
	|         | default-k8s-different-port-20220325020956-262786           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:23:09 UTC | Fri, 25 Mar 2022 02:23:09 UTC |
	|         | default-k8s-different-port-20220325020956-262786           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:25:38 UTC | Fri, 25 Mar 2022 02:25:39 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20220325015306-262786                      | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:28:37 UTC | Fri, 25 Mar 2022 02:28:38 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20220325015306-262786            | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:28:39 UTC | Fri, 25 Mar 2022 02:28:41 UTC |
	|         | old-k8s-version-20220325015306-262786                      |                                                  |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220325020956-262786           | default-k8s-different-port-20220325020956-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:32:11 UTC | Fri, 25 Mar 2022 02:32:11 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | no-preload-20220325020326-262786                           | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:34:41 UTC | Fri, 25 Mar 2022 02:34:42 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20220325020326-262786                 | jenkins | v1.25.2 | Fri, 25 Mar 2022 02:34:42 UTC | Fri, 25 Mar 2022 02:34:45 UTC |
	|         | no-preload-20220325020326-262786                           |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 02:23:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0325 02:23:09.537576  530227 out.go:297] Setting OutFile to fd 1 ...
	I0325 02:23:09.537696  530227 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:23:09.537706  530227 out.go:310] Setting ErrFile to fd 2...
	I0325 02:23:09.537710  530227 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 02:23:09.537815  530227 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 02:23:09.538048  530227 out.go:304] Setting JSON to false
	I0325 02:23:09.539384  530227 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":18062,"bootTime":1648156928,"procs":378,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 02:23:09.539464  530227 start.go:125] virtualization: kvm guest
	I0325 02:23:09.542093  530227 out.go:176] * [default-k8s-different-port-20220325020956-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 02:23:09.543709  530227 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 02:23:09.542258  530227 notify.go:193] Checking for updates...
	I0325 02:23:09.545591  530227 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 02:23:09.547307  530227 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:23:09.548939  530227 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 02:23:09.550462  530227 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 02:23:09.550916  530227 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:23:09.551395  530227 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 02:23:09.596032  530227 docker.go:136] docker version: linux-20.10.14
	I0325 02:23:09.596139  530227 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:23:09.694688  530227 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 02:23:09.627733687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 02:23:09.694822  530227 docker.go:253] overlay module found
	I0325 02:23:09.697284  530227 out.go:176] * Using the docker driver based on existing profile
	I0325 02:23:09.697314  530227 start.go:284] selected driver: docker
	I0325 02:23:09.697321  530227 start.go:801] validating driver "docker" against &{Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956
-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostT
imeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:23:09.697441  530227 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 02:23:09.697477  530227 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:23:09.697500  530227 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:23:09.699359  530227 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 02:23:09.700002  530227 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 02:23:09.794728  530227 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 02:23:09.730700135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0325 02:23:09.794990  530227 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 02:23:09.795026  530227 out.go:241] ! Your cgroup does not allow setting memory.
	I0325 02:23:09.797186  530227 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
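The cgroup warning appears twice because the driver is probed with `docker system info` both before and after selection (02:23:09.596 and 02:23:09.700 above). What the probe evaluates can be checked on the host with the same template mechanism (a sketch; field names as reported in the info dump above):

    docker info --format 'MemoryLimit={{.MemoryLimit}} CgroupDriver={{.CgroupDriver}}'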
	I0325 02:23:09.797321  530227 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0325 02:23:09.797348  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:09.797358  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:09.797376  530227 start_flags.go:304] config:
	{Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNod
eRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:23:09.799343  530227 out.go:176] * Starting control plane node default-k8s-different-port-20220325020956-262786 in cluster default-k8s-different-port-20220325020956-262786
	I0325 02:23:09.799390  530227 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 02:23:09.800868  530227 out.go:176] * Pulling base image ...
	I0325 02:23:09.800894  530227 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:23:09.800929  530227 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
	I0325 02:23:09.800950  530227 cache.go:57] Caching tarball of preloaded images
	I0325 02:23:09.800988  530227 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 02:23:09.801249  530227 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0325 02:23:09.801271  530227 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on containerd
	I0325 02:23:09.801464  530227 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json ...
	I0325 02:23:09.836753  530227 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 02:23:09.836785  530227 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 02:23:09.836808  530227 cache.go:208] Successfully downloaded all kic artifacts
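Both artifacts were served from cache: the v1.23.3 preload tarball from MINIKUBE_HOME and the kicbase image from the local daemon. The image check can be reproduced manually (a sketch; the digest-pinned reference is copied from this run):

    docker image inspect \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 \
      >/dev/null 2>&1 && echo 'cached, pull skipped' || echo 'pull required'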
	I0325 02:23:09.836875  530227 start.go:348] acquiring machines lock for default-k8s-different-port-20220325020956-262786: {Name:mk1740da455fcceda9a6f7400776a3a68790d09c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0325 02:23:09.836992  530227 start.go:352] acquired machines lock for "default-k8s-different-port-20220325020956-262786" in 82.748µs
	I0325 02:23:09.837017  530227 start.go:94] Skipping create...Using existing machine configuration
	I0325 02:23:09.837034  530227 fix.go:55] fixHost starting: 
	I0325 02:23:09.837307  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:23:09.870534  530227 fix.go:108] recreateIfNeeded on default-k8s-different-port-20220325020956-262786: state=Stopped err=<nil>
	W0325 02:23:09.870565  530227 fix.go:134] unexpected machine state, will restart: <nil>
	I0325 02:23:06.279884  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:08.779908  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:09.872836  530227 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20220325020956-262786" ...
	I0325 02:23:09.872897  530227 cli_runner.go:133] Run: docker start default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.277624  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:23:10.313461  530227 kic.go:420] container "default-k8s-different-port-20220325020956-262786" state is running.
	I0325 02:23:10.314041  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.349467  530227 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/config.json ...
	I0325 02:23:10.349684  530227 machine.go:88] provisioning docker machine ...
	I0325 02:23:10.349734  530227 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220325020956-262786"
	I0325 02:23:10.349784  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:10.385648  530227 main.go:130] libmachine: Using SSH client type: native
	I0325 02:23:10.385835  530227 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49594 <nil> <nil>}
	I0325 02:23:10.385854  530227 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220325020956-262786 && echo "default-k8s-different-port-20220325020956-262786" | sudo tee /etc/hostname
	I0325 02:23:10.386524  530227 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33004->127.0.0.1:49594: read: connection reset by peer
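(The dial failure above is a startup race, not a provisioning error: immediately after `docker start`, sshd inside the kic container is not yet accepting connections on the mapped port, so the first attempt is reset and libmachine retries; the command succeeds three seconds later below. A minimal sketch of the same readiness wait, assuming the host port 49594 from this log:
	PORT=49594   # host port mapped to the container's 22/tcp, per the log above
	until ssh -o StrictHostKeyChecking=no -o ConnectTimeout=2 -p "$PORT" docker@127.0.0.1 true 2>/dev/null; do
	  sleep 1    # keep retrying until sshd inside the container answers
	done
)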
	I0325 02:23:13.516245  530227 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220325020956-262786
	
	I0325 02:23:13.516321  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.552077  530227 main.go:130] libmachine: Using SSH client type: native
	I0325 02:23:13.552283  530227 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49594 <nil> <nil>}
	I0325 02:23:13.552307  530227 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220325020956-262786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220325020956-262786/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220325020956-262786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0325 02:23:13.671145  530227 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0325 02:23:13.671181  530227 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0325 02:23:13.671209  530227 ubuntu.go:177] setting up certificates
	I0325 02:23:13.671220  530227 provision.go:83] configureAuth start
	I0325 02:23:13.671284  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.707509  530227 provision.go:138] copyHostCerts
	I0325 02:23:13.707567  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0325 02:23:13.707583  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0325 02:23:13.707654  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0325 02:23:13.707752  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0325 02:23:13.707763  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0325 02:23:13.707785  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0325 02:23:13.707835  530227 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0325 02:23:13.707843  530227 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0325 02:23:13.707863  530227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0325 02:23:13.707902  530227 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220325020956-262786 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220325020956-262786]
	I0325 02:23:13.801684  530227 provision.go:172] copyRemoteCerts
	I0325 02:23:13.801761  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0325 02:23:13.801796  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:13.837900  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:13.926796  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0325 02:23:13.945040  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0325 02:23:13.962557  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0325 02:23:13.980609  530227 provision.go:86] duration metric: configureAuth took 309.376559ms
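(configureAuth regenerated the machine's server certificate with the SAN list logged above: 192.168.49.2, 127.0.0.1, localhost, minikube, and the profile name. A hedged way to confirm what actually ended up in the certificate, assuming MINIKUBE_HOME points at the .minikube directory shown in these paths:
	openssl x509 -noout -text -in "$MINIKUBE_HOME/machines/server.pem" \
	  | grep -A1 'Subject Alternative Name'   # should list the san=[...] entries from the log
)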
	I0325 02:23:13.980640  530227 ubuntu.go:193] setting minikube options for container-runtime
	I0325 02:23:13.980824  530227 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:23:13.980838  530227 machine.go:91] provisioned docker machine in 3.631132536s
	I0325 02:23:13.980846  530227 start.go:302] post-start starting for "default-k8s-different-port-20220325020956-262786" (driver="docker")
	I0325 02:23:13.980853  530227 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0325 02:23:13.980892  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0325 02:23:13.980932  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.016302  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.102734  530227 ssh_runner.go:195] Run: cat /etc/os-release
	I0325 02:23:14.105732  530227 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0325 02:23:14.105760  530227 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0325 02:23:14.105786  530227 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0325 02:23:14.105795  530227 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0325 02:23:14.105810  530227 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0325 02:23:14.105871  530227 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0325 02:23:14.105966  530227 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
	I0325 02:23:14.106069  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0325 02:23:14.113216  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:23:14.131102  530227 start.go:305] post-start completed in 150.235781ms
	I0325 02:23:14.131193  530227 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 02:23:14.131252  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.166319  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.255555  530227 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0325 02:23:14.259268  530227 fix.go:57] fixHost completed within 4.422236664s
	I0325 02:23:14.259296  530227 start.go:81] releasing machines lock for "default-k8s-different-port-20220325020956-262786", held for 4.422290413s
	I0325 02:23:14.259383  530227 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.295568  530227 ssh_runner.go:195] Run: systemctl --version
	I0325 02:23:14.295622  530227 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0325 02:23:14.295624  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.295670  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:23:14.331630  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.332124  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:23:14.440710  530227 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0325 02:23:14.453593  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0325 02:23:14.463531  530227 docker.go:183] disabling docker service ...
	I0325 02:23:14.463587  530227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0325 02:23:14.473649  530227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0325 02:23:14.482885  530227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0325 02:23:10.781510  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:13.279624  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:15.280218  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:14.552504  530227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0325 02:23:14.625188  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0325 02:23:14.634619  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0325 02:23:14.648987  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
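(The printf argument in the command above is the entire containerd config.toml, base64-encoded so it survives the nested shell quoting; `base64 -d` on the node recovers the TOML. As a quick check after the command has run, the decoded file begins:
	sudo head -n 4 /etc/containerd/config.toml
	# version = 2
	# root = "/var/lib/containerd"
	# state = "/run/containerd"
	# oom_score = 0
)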
	I0325 02:23:14.662584  530227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0325 02:23:14.669661  530227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0325 02:23:14.676535  530227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0325 02:23:14.749687  530227 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0325 02:23:14.824010  530227 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0325 02:23:14.824124  530227 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0325 02:23:14.828479  530227 start.go:462] Will wait 60s for crictl version
	I0325 02:23:14.828546  530227 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:23:14.854134  530227 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-03-25T02:23:14Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
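(This fatal is expected for a moment: containerd was restarted a few entries earlier, and its CRI server answers "server is not initialized yet" until the plugin finishes loading, which is why minikube schedules the 11s retry above. The equivalent wait, as a sketch:
	until sudo crictl version >/dev/null 2>&1; do
	  sleep 2   # CRI server needs a few seconds after `systemctl restart containerd`
	done
	sudo crictl version   # now prints the RuntimeName/RuntimeVersion seen below
)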
	I0325 02:23:17.779273  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:19.780082  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:21.780204  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:24.279380  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:25.901131  530227 ssh_runner.go:195] Run: sudo crictl version
	I0325 02:23:25.924531  530227 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0325 02:23:25.924599  530227 ssh_runner.go:195] Run: containerd --version
	I0325 02:23:25.944738  530227 ssh_runner.go:195] Run: containerd --version
	I0325 02:23:25.965406  530227 out.go:176] * Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
	I0325 02:23:25.965490  530227 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20220325020956-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0325 02:23:25.998365  530227 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0325 02:23:26.001776  530227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:23:26.013555  530227 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0325 02:23:26.013655  530227 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 02:23:26.013730  530227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:23:26.037965  530227 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:23:26.037994  530227 containerd.go:526] Images already preloaded, skipping extraction
	I0325 02:23:26.038048  530227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0325 02:23:26.062141  530227 containerd.go:612] all images are preloaded for containerd runtime.
	I0325 02:23:26.062166  530227 cache_images.go:84] Images are preloaded, skipping loading
	I0325 02:23:26.062213  530227 ssh_runner.go:195] Run: sudo crictl info
	I0325 02:23:26.086309  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:26.086334  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:26.086348  530227 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0325 02:23:26.086361  530227 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220325020956-262786 NodeName:default-k8s-different-port-20220325020956-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0325 02:23:26.086482  530227 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220325020956-262786"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
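(The three documents above — InitConfiguration plus ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration — are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few entries below; restartCluster then decides whether a reconfigure is needed by diffing that render against the copy already on the node, the same check that appears later in this log:
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
)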
	
	I0325 02:23:26.086574  530227 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220325020956-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
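(The unit snippet above lands as a systemd drop-in — 10-kubeadm.conf, per the scp lines that follow — so kubelet's effective command line is the base unit plus this drop-in. To see the merged result on the node:
	sudo systemctl daemon-reload   # pick up the freshly copied drop-in
	systemctl cat kubelet          # prints kubelet.service followed by its drop-ins
)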
	I0325 02:23:26.086621  530227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
	I0325 02:23:26.093791  530227 binaries.go:44] Found k8s binaries, skipping transfer
	I0325 02:23:26.093861  530227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0325 02:23:26.101104  530227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (593 bytes)
	I0325 02:23:26.114154  530227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0325 02:23:26.127481  530227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I0325 02:23:26.139891  530227 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0325 02:23:26.142699  530227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0325 02:23:26.151979  530227 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786 for IP: 192.168.49.2
	I0325 02:23:26.152115  530227 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0325 02:23:26.152173  530227 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0325 02:23:26.152283  530227 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/client.key
	I0325 02:23:26.152367  530227 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key.dd3b5fb2
	I0325 02:23:26.152432  530227 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key
	I0325 02:23:26.152572  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
	W0325 02:23:26.152618  530227 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
	I0325 02:23:26.152633  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
	I0325 02:23:26.152719  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0325 02:23:26.152762  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0325 02:23:26.152796  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0325 02:23:26.152856  530227 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
	I0325 02:23:26.153663  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0325 02:23:26.170543  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0325 02:23:26.188516  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0325 02:23:26.206252  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220325020956-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0325 02:23:26.223851  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0325 02:23:26.240997  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0325 02:23:26.258925  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0325 02:23:26.276782  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0325 02:23:26.293956  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0325 02:23:26.311184  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
	I0325 02:23:26.328788  530227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
	I0325 02:23:26.345739  530227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0325 02:23:26.358217  530227 ssh_runner.go:195] Run: openssl version
	I0325 02:23:26.363310  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
	I0325 02:23:26.371143  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.374386  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.374446  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
	I0325 02:23:26.379667  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
	I0325 02:23:26.386880  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0325 02:23:26.394406  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.397558  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.397619  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0325 02:23:26.402576  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0325 02:23:26.409580  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
	I0325 02:23:26.416799  530227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.419794  530227 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.419843  530227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
	I0325 02:23:26.424480  530227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
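(The hash-named links created here — 3ec20f2e.0, b5213941.0, 51391683.0 — follow OpenSSL's c_rehash convention: the file name is the certificate's subject-name hash, which is exactly what the interleaved `openssl x509 -hash -noout` runs print. Recreating one by hand, using the minikubeCA cert as the example:
	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$H.0"
)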
	I0325 02:23:26.431093  530227 kubeadm.go:391] StartCluster: {Name:default-k8s-different-port-20220325020956-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:default-k8s-different-port-20220325020956-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 02:23:26.431219  530227 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0325 02:23:26.431267  530227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:23:26.455469  530227 cri.go:87] found id: "f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db"
	I0325 02:23:26.455495  530227 cri.go:87] found id: "246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	I0325 02:23:26.455501  530227 cri.go:87] found id: "dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b"
	I0325 02:23:26.455506  530227 cri.go:87] found id: "21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73"
	I0325 02:23:26.455510  530227 cri.go:87] found id: "bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7"
	I0325 02:23:26.455515  530227 cri.go:87] found id: "6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182"
	I0325 02:23:26.455520  530227 cri.go:87] found id: "c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd"
	I0325 02:23:26.455524  530227 cri.go:87] found id: ""
	I0325 02:23:26.455562  530227 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0325 02:23:26.469264  530227 cri.go:114] JSON = null
	W0325 02:23:26.469319  530227 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 7
	I0325 02:23:26.469383  530227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0325 02:23:26.476380  530227 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0325 02:23:26.476423  530227 kubeadm.go:601] restartCluster start
	I0325 02:23:26.476467  530227 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0325 02:23:26.483313  530227 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:26.484051  530227 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220325020956-262786" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:23:26.484409  530227 kubeconfig.go:127] "default-k8s-different-port-20220325020956-262786" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0325 02:23:26.485050  530227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:23:26.486481  530227 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0325 02:23:26.493604  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.493676  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.502078  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:26.702482  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.702567  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.712014  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:26.902246  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:26.902320  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:26.910978  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.103208  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.103289  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.111964  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.303121  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.303213  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.312214  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.502493  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.502598  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.511468  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.702747  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.702890  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.711697  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:27.902931  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:27.903050  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:27.912319  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.102538  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.102634  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.111710  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.303008  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.303080  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.312078  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.502221  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.502313  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.511095  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.702230  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.702303  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.711103  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:28.902322  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:28.902413  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:28.911515  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.102704  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.102774  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.111434  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.302770  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.302858  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.311706  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.503069  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.503150  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.512690  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.512721  530227 api_server.go:165] Checking apiserver status ...
	I0325 02:23:29.512770  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0325 02:23:29.521635  530227 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
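(Every probe in the loop above is the same command: pgrep exits 1 while no kube-apiserver whose command line matches the profile exists, so the empty stdout/stderr blocks are the expected shape of a "not up yet" answer. The wait, condensed:
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.2   # identical probe; exits 0 once a matching apiserver process appears
	done
)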
	I0325 02:23:29.521669  530227 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0325 02:23:29.521677  530227 kubeadm.go:1067] stopping kube-system containers ...
	I0325 02:23:29.521695  530227 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0325 02:23:29.521749  530227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0325 02:23:26.279462  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:28.279531  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:30.280060  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:29.546890  530227 cri.go:87] found id: "f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db"
	I0325 02:23:29.546921  530227 cri.go:87] found id: "246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739"
	I0325 02:23:29.546927  530227 cri.go:87] found id: "dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b"
	I0325 02:23:29.546932  530227 cri.go:87] found id: "21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73"
	I0325 02:23:29.546937  530227 cri.go:87] found id: "bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7"
	I0325 02:23:29.546942  530227 cri.go:87] found id: "6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182"
	I0325 02:23:29.546946  530227 cri.go:87] found id: "c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd"
	I0325 02:23:29.546979  530227 cri.go:87] found id: ""
	I0325 02:23:29.546987  530227 cri.go:232] Stopping containers: [f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db 246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739 dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b 21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73 bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7 6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182 c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd]
	I0325 02:23:29.547049  530227 ssh_runner.go:195] Run: which crictl
	I0325 02:23:29.550389  530227 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop f5e3884eab77792b0ecc5c7eaebf1a1f9e2b892d8726226cef726625722541db 246eba7f6d94c4ec0678e13ec1d1d66f7aa83399eaac2fd774bb566185fa2739 dd3e42aaf3dd808002a7d521d827311c8dac9a05e2a4c16f575d3cf06075312b 21482958b68c27e0814894bfe7f5b8e0234b62ee60b73dc5bfe8376bf1d86b73 bc6cf9877becc4202b89ab289a3e839b7dd6be0bef969bc4021ddfc38d5797f7 6a469f6f4de500ffddff8ccff7984833c5b41260ba7466a9bad5c5ec4082b182 c154a93ac7de2583fc48eb55298902b5ed5f8bb08bbf6c89c5b07e9a0e9f5dcd
	I0325 02:23:29.575922  530227 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0325 02:23:29.586795  530227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:23:29.594440  530227 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar 25 02:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Mar 25 02:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2131 Mar 25 02:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Mar 25 02:10 /etc/kubernetes/scheduler.conf
	
	I0325 02:23:29.594520  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0325 02:23:29.601472  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0325 02:23:29.608305  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0325 02:23:29.615261  530227 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.615319  530227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0325 02:23:29.622383  530227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0325 02:23:29.629095  530227 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0325 02:23:29.629161  530227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0325 02:23:29.636095  530227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:23:29.642934  530227 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0325 02:23:29.642998  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:29.687932  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.297307  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.428688  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:30.476555  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
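(Rather than a full `kubeadm init`, the restart path replays individual init phases against the same rendered config, in exactly the order just logged; condensed:
	KUBEADM=/var/lib/minikube/binaries/v1.23.3/kubeadm
	CFG=/var/tmp/minikube/kubeadm.yaml
	for PHASE in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" $KUBEADM init phase $PHASE --config $CFG
	done
)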
	I0325 02:23:30.528341  530227 api_server.go:51] waiting for apiserver process to appear ...
	I0325 02:23:30.528397  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:31.037340  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:31.536903  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:32.037557  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:32.537100  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:33.037156  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:33.537124  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:34.037604  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:34.536762  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:32.280264  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:34.779413  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:35.037573  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:35.536890  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.037157  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.536733  530227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 02:23:36.598317  530227 api_server.go:71] duration metric: took 6.069979844s to wait for apiserver process to appear ...
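
The burst of pgrep calls above is a roughly 500 ms poll for the kube-apiserver process, closed out by the duration-metric line after 6.07 s. A self-contained sketch of that poll; the pattern and cadence come from the log, while the 4-minute cap is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	deadline := start.Add(4 * time.Minute) // assumed overall cap
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 once a matching process exists.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Printf("took %s to wait for apiserver process\n", time.Since(start))
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver process")
    }
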
	I0325 02:23:36.598362  530227 api_server.go:87] waiting for apiserver healthz status ...
	I0325 02:23:36.598380  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:36.598866  530227 api_server.go:256] stopped: https://192.168.49.2:8444/healthz: Get "https://192.168.49.2:8444/healthz": dial tcp 192.168.49.2:8444: connect: connection refused
	I0325 02:23:37.099575  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:36.779484  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:38.779979  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:40.211650  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0325 02:23:40.211687  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0325 02:23:40.599053  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:40.603812  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:23:40.603846  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:23:41.099269  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:41.104481  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0325 02:23:41.104517  530227 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0325 02:23:41.599902  530227 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0325 02:23:41.604945  530227 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0325 02:23:41.612918  530227 api_server.go:140] control plane version: v1.23.3
	I0325 02:23:41.612944  530227 api_server.go:130] duration metric: took 5.014575703s to wait for apiserver health ...
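
The probe sequence above is the usual bootstrap progression: connection refused while the apiserver binds, 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200. A sketch of such a probe loop, assuming a skip-TLS-verify client; the URL is from the log, the retry interval and timeout are assumptions:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		// The apiserver serves a self-signed cert at this point.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	for start := time.Now(); time.Since(start) < 4*time.Minute; time.Sleep(500 * time.Millisecond) {
    		resp, err := client.Get("https://192.168.49.2:8444/healthz")
    		if err != nil {
    			continue // e.g. "connect: connection refused" while booting
    		}
    		code := resp.StatusCode
    		resp.Body.Close()
    		if code == http.StatusOK {
    			fmt.Println("apiserver healthy")
    			return
    		}
    		// 403 and 500 are expected transient states during bootstrap.
    	}
    	fmt.Println("timed out waiting for /healthz")
    }
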
	I0325 02:23:41.612957  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:23:41.612965  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:23:41.615242  530227 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:23:41.615325  530227 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:23:41.619644  530227 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 02:23:41.619669  530227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:23:41.633910  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
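
Because the docker driver with the containerd runtime has no built-in pod network, minikube pushes a kindnet manifest to the node (2429 bytes, per the scp line) and applies it with the pinned kubectl. A sketch of that step; the manifest body is elided below since the log does not include it:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Placeholder: the real file is the kindnet DaemonSet + RBAC manifest.
    	manifest := []byte("# CNI manifest contents go here\n")
    	if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0o644); err != nil {
    		panic(err)
    	}
    	out, err := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.23.3/kubectl", "apply",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
    	fmt.Printf("%s", out)
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }
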
	I0325 02:23:42.356822  530227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0325 02:23:42.365307  530227 system_pods.go:59] 9 kube-system pods found
	I0325 02:23:42.365343  530227 system_pods.go:61] "coredns-64897985d-9tgbz" [0d638e01-927d-4431-bf10-393b424f801a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365353  530227 system_pods.go:61] "etcd-default-k8s-different-port-20220325020956-262786" [10e10258-89d5-423b-850f-60ef4b12b83a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0325 02:23:42.365361  530227 system_pods.go:61] "kindnet-kt955" [87a42b24-60b7-415b-abc9-e574262093c0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0325 02:23:42.365368  530227 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220325020956-262786" [877f6ccd-dcc7-47ff-8574-9b9ec1b05a25] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0325 02:23:42.365376  530227 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220325020956-262786" [cbd16e08-169e-458a-b9c2-bcaa627475cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0325 02:23:42.365382  530227 system_pods.go:61] "kube-proxy-7cpjt" [6d1657ba-6fcd-4ee8-8293-b6aa0b7e1fb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0325 02:23:42.365387  530227 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220325020956-262786" [7b21b770-272f-4183-a1e4-6cca761e7be8] Running
	I0325 02:23:42.365395  530227 system_pods.go:61] "metrics-server-b955d9d8-h94qn" [f250996f-f9e2-41f2-ba86-6da05d627811] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365401  530227 system_pods.go:61] "storage-provisioner" [1f4e27b1-94bb-49ed-b16e-7237ce00c11a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0325 02:23:42.365409  530227 system_pods.go:74] duration metric: took 8.560724ms to wait for pod list to return data ...
	I0325 02:23:42.365419  530227 node_conditions.go:102] verifying NodePressure condition ...
	I0325 02:23:42.368395  530227 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0325 02:23:42.368426  530227 node_conditions.go:123] node cpu capacity is 8
	I0325 02:23:42.368439  530227 node_conditions.go:105] duration metric: took 3.013418ms to run NodePressure ...
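
The pod list and NodePressure checks above map onto two plain API reads: list the kube-system pods, then read node capacity (the 304695084Ki ephemeral storage and 8-cpu figures). A client-go sketch of both reads, assuming the node-local kubeconfig path seen elsewhere in the log:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name,
    			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
    	}
    }
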
	I0325 02:23:42.368460  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0325 02:23:42.498603  530227 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0325 02:23:42.503044  530227 kubeadm.go:752] kubelet initialised
	I0325 02:23:42.503087  530227 kubeadm.go:753] duration metric: took 4.396508ms waiting for restarted kubelet to initialise ...
	I0325 02:23:42.503097  530227 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:23:42.508446  530227 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-9tgbz" in "kube-system" namespace to be "Ready" ...
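
The pod_ready.go lines that follow all reduce to one predicate: a pod counts as "Ready" only when its PodReady condition is True, and a Pending pod stuck Unschedulable on the node.kubernetes.io/not-ready taint never gets that far, which is why the same status dump repeats for the next four minutes. A sketch of that predicate (the function name is an assumption; the sample status mirrors the log):

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *v1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == v1.PodReady {
    			return c.Status == v1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// A Pending pod blocked by the not-ready taint, as in the log.
    	pending := &v1.Pod{Status: v1.PodStatus{
    		Phase: v1.PodPending,
    		Conditions: []v1.PodCondition{{
    			Type:    v1.PodScheduled,
    			Status:  v1.ConditionFalse,
    			Reason:  "Unschedulable",
    			Message: "0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.",
    		}},
    	}}
    	fmt.Println(isPodReady(pending)) // false, hence the repeated waits
    }
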
	I0325 02:23:44.514894  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:41.279719  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:43.779807  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:46.515114  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:49.014836  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:46.279223  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:48.279265  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:50.280221  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:51.514564  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:54.014786  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:52.780223  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:55.279495  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:56.514871  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:59.014358  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:23:57.280104  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:23:59.779435  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:01.015007  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:03.514691  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:01.779945  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:03.780076  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:05.515135  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:08.014925  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:06.279495  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:08.280022  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:10.514744  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:12.514875  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:10.779769  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:13.279988  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:15.014427  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:17.514431  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:15.779111  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:17.779860  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:20.282496  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:20.015198  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:22.514500  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:22.779392  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:24.779583  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:25.014188  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:27.015284  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:29.515114  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:27.280129  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:29.779139  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:32.015294  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:34.514331  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:31.779438  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:34.279292  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:36.514446  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:39.014203  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:36.280233  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:38.779288  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:41.015081  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:43.515133  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:40.779876  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:43.279836  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:45.280111  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:46.014807  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:48.513848  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:47.779037  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:49.779225  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:50.514522  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:53.014610  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:52.279107  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:54.279992  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:55.514800  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:58.014633  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:24:56.280212  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:24:58.779953  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:00.514555  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:02.514600  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:04.514849  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:00.780066  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:03.279884  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:05.280246  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:07.014221  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:09.014509  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:07.780397  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:10.279278  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:11.014691  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:13.014798  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:12.779414  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:15.279560  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:15.514210  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:17.514263  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:19.515014  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:17.779664  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:19.779727  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:22.014469  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:24.015322  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:22.279477  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:24.779885  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:26.514766  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:29.014967  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:27.279254  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:29.280083  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:31.514230  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:34.014655  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:31.779951  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:34.279813  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:36.279928  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:38.282274  519649 node_ready.go:58] node "no-preload-20220325020326-262786" has status "Ready":"False"
	I0325 02:25:38.282298  519649 node_ready.go:38] duration metric: took 4m0.009544217s waiting for node "no-preload-20220325020326-262786" to be "Ready" ...
	I0325 02:25:38.285018  519649 out.go:176] 
	W0325 02:25:38.285266  519649 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:25:38.285284  519649 out.go:241] * 
	W0325 02:25:38.286304  519649 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
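
The GUEST_START failure above is the other test process (519649) giving up: its node-Ready poll ran at the ~2.5 s cadence visible in the node_ready.go lines and expired after 4m0s inside the overall 6m0s budget. A sketch of such a bounded wait, with a deliberately short timeout so the example finishes quickly; the helper name is hypothetical, and the error text mirrors the log:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitNodeReady polls check until it succeeds or timeout elapses.
    func waitNodeReady(check func() bool, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if check() {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return errors.New("waitNodeCondition: timed out waiting for the condition")
    }

    func main() {
    	// A node that never reports Ready, as in the failing test; the real
    	// interval is ~2.5s and the real budget several minutes.
    	err := waitNodeReady(func() bool { return false }, 100*time.Millisecond, time.Second)
    	fmt.Println(err)
    }
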
	I0325 02:25:36.513926  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:38.514284  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:40.514364  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:42.514810  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:45.014814  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:47.016025  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:49.514217  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:51.514677  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:25:54.014149  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... 46 further pod_ready.go:102 polls of "coredns-64897985d-9tgbz", identical apart from timestamps (02:25:56 through 02:27:38, every ~2.5s), elided ...]
	I0325 02:27:40.514302  530227 pod_ready.go:102] pod "coredns-64897985d-9tgbz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-03-25 02:10:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0325 02:27:42.510816  530227 pod_ready.go:81] duration metric: took 4m0.002335219s waiting for pod "coredns-64897985d-9tgbz" in "kube-system" namespace to be "Ready" ...
	E0325 02:27:42.510845  530227 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9tgbz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0325 02:27:42.510866  530227 pod_ready.go:38] duration metric: took 4m0.007755725s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0325 02:27:42.510971  530227 kubeadm.go:605] restartCluster took 4m16.034541089s
	W0325 02:27:42.511146  530227 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
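
The four-minute ceiling that expires above (pod_ready.go:81/66) is a simple poll-with-deadline loop: fetch the pod, inspect its Ready condition, sleep, repeat. A minimal sketch of the pattern using client-go; the helper name waitPodReady and the hard-coded 2.5s interval (read off the timestamps above) are illustrative, not minikube's actual API:

	// waitPodReady polls a pod's Ready condition until it is true or the
	// deadline passes, mirroring the pod_ready.go loop in the log above.
	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					// the pod above never reaches this branch: it is stuck
					// Pending/Unschedulable because the node still carries
					// the node.kubernetes.io/not-ready taint
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting %v for pod %q in %q to be Ready", timeout, name, ns)
	}
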
	I0325 02:27:42.511207  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0325 02:27:44.339219  530227 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.827981438s)
	I0325 02:27:44.339290  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:27:44.348982  530227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0325 02:27:44.356461  530227 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0325 02:27:44.356520  530227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0325 02:27:44.363951  530227 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0325 02:27:44.364022  530227 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0325 02:27:57.283699  530227 out.go:203]   - Generating certificates and keys ...
	I0325 02:27:57.286878  530227 out.go:203]   - Booting up control plane ...
	I0325 02:27:57.289872  530227 out.go:203]   - Configuring RBAC rules ...
	I0325 02:27:57.291696  530227 cni.go:93] Creating CNI manager for ""
	I0325 02:27:57.291719  530227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 02:27:57.293919  530227 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0325 02:27:57.294011  530227 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0325 02:27:57.297810  530227 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0325 02:27:57.297833  530227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0325 02:27:57.312402  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
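
The CNI step above is two operations: the kindnet manifest is written from memory to /var/tmp/minikube/cni.yaml inside the guest, then applied with the version-matched kubectl under /var/lib/minikube/binaries. A sketch of the apply half; applyCNI and the direct exec.Command call are illustrative stand-ins for minikube's ssh_runner:

	// applyCNI applies the staged CNI manifest with the cluster's own
	// kubectl binary, as in the 02:27:57.312402 line above.
	package cni

	import (
		"fmt"
		"os/exec"
	)

	func applyCNI(kubectlVersion string) error {
		kubectl := fmt.Sprintf("/var/lib/minikube/binaries/%s/kubectl", kubectlVersion)
		out, err := exec.Command("sudo", kubectl, "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v: %s", err, out)
		}
		return nil
	}
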
	I0325 02:27:58.034457  530227 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0325 02:27:58.034521  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=default-k8s-different-port-20220325020956-262786 minikube.k8s.io/updated_at=2022_03_25T02_27_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:58.034522  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:58.101247  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:27:58.118657  530227 ops.go:34] apiserver oom_adj: -16
	I0325 02:27:58.688158  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... 21 further identical "kubectl get sa default" runs (02:27:59 through 02:28:09, every ~0.5s), elided ...]
	I0325 02:28:09.688031  530227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0325 02:28:09.751277  530227 kubeadm.go:1020] duration metric: took 11.716819332s to wait for elevateKubeSystemPrivileges.
	I0325 02:28:09.751307  530227 kubeadm.go:393] StartCluster complete in 4m43.320221544s
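
The burst of "kubectl get sa default" runs between 02:27:58 and 02:28:09 is a readiness gate: the default ServiceAccount is created asynchronously after kubeadm init, and the minikube-rbac cluster-admin binding for kube-system is only useful once it exists, so the lookup is retried on a short interval (elevateKubeSystemPrivileges, 11.7s here). A sketch of that gate; waitForDefaultSA and the 30s cap are illustrative names and values, not minikube's real ones:

	// waitForDefaultSA retries "kubectl get sa default" until the API server
	// has created the default ServiceAccount, as in the loop above.
	package kubeadm

	import (
		"os/exec"
		"time"
	)

	func waitForDefaultSA(kubectl string) bool {
		deadline := time.Now().Add(30 * time.Second)
		for time.Now().Before(deadline) {
			err := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				return true // SA exists; safe to create the RBAC binding
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing above
		}
		return false
	}
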
	I0325 02:28:09.751334  530227 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:28:09.751483  530227 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 02:28:09.752678  530227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0325 02:28:10.268555  530227 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220325020956-262786" rescaled to 1
	I0325 02:28:10.268633  530227 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0325 02:28:10.268674  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0325 02:28:10.270943  530227 out.go:176] * Verifying Kubernetes components...
	I0325 02:28:10.268968  530227 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0325 02:28:10.269163  530227 config.go:176] Loaded profile config "default-k8s-different-port-20220325020956-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 02:28:10.271075  530227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 02:28:10.271163  530227 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271166  530227 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271183  530227 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271188  530227 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271189  530227 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.271192  530227 addons.go:165] addon metrics-server should already be in state true
	I0325 02:28:10.271207  530227 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.271217  530227 addons.go:165] addon dashboard should already be in state true
	I0325 02:28:10.271232  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.271251  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.271164  530227 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:10.271302  530227 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.271316  530227 addons.go:165] addon storage-provisioner should already be in state true
	I0325 02:28:10.271343  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.271538  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.271708  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.271708  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.271833  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.344040  530227 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0325 02:28:10.344132  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0325 02:28:10.344144  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0325 02:28:10.344219  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.346679  530227 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0325 02:28:10.346811  530227 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:28:10.346826  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0325 02:28:10.346882  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.353938  530227 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0325 02:28:10.355562  530227 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I0325 02:28:10.355640  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0325 02:28:10.355656  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0325 02:28:10.355719  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.383518  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.385223  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.391464  530227 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220325020956-262786"
	W0325 02:28:10.391493  530227 addons.go:165] addon default-storageclass should already be in state true
	I0325 02:28:10.391524  530227 host.go:66] Checking if "default-k8s-different-port-20220325020956-262786" exists ...
	I0325 02:28:10.392049  530227 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20220325020956-262786 --format={{.State.Status}}
	I0325 02:28:10.400074  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.437891  530227 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0325 02:28:10.437915  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0325 02:28:10.437962  530227 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220325020956-262786
	I0325 02:28:10.471205  530227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49594 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220325020956-262786/id_rsa Username:docker}
	I0325 02:28:10.494523  530227 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220325020956-262786" to be "Ready" ...
	I0325 02:28:10.494562  530227 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0325 02:28:10.609220  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0325 02:28:10.609589  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0325 02:28:10.609655  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0325 02:28:10.609633  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0325 02:28:10.609758  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0325 02:28:10.700787  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0325 02:28:10.700823  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0325 02:28:10.701805  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0325 02:28:10.701834  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0325 02:28:10.800343  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0325 02:28:10.800381  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0325 02:28:10.808905  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0325 02:28:10.810521  530227 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:28:10.810550  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0325 02:28:10.899094  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0325 02:28:10.899126  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0325 02:28:10.905212  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0325 02:28:11.003467  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0325 02:28:11.003501  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0325 02:28:11.102902  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0325 02:28:11.102933  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0325 02:28:11.203761  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0325 02:28:11.203793  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0325 02:28:11.210549  530227 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
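
The host record above is injected by the sed pipeline at 02:28:10.494: it pulls the coredns ConfigMap, splices a hosts block in front of the existing "forward . /etc/resolv.conf" directive, and feeds the result back through kubectl replace. Unescaping the sed program, the stanza that lands in the Corefile is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

With that block first, in-cluster lookups of host.minikube.internal resolve to the docker network gateway, and everything else falls through to the forward plugin as before.
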
	I0325 02:28:11.294868  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0325 02:28:11.294905  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0325 02:28:11.407000  530227 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:28:11.407036  530227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0325 02:28:11.594579  530227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0325 02:28:11.993918  530227 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.08865456s)
	I0325 02:28:11.994023  530227 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220325020956-262786"
	I0325 02:28:12.406348  530227 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0325 02:28:12.406384  530227 addons.go:417] enableAddons completed in 2.137426118s
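
Each "scp memory --> ..." line above is a single in-memory transfer: the addon manifest is rendered on the host and streamed over the existing SSH session into /etc/kubernetes/addons, then applied in batches with kubectl. A sketch of the transfer under those assumptions; writeRemote is an illustrative name, and minikube's real implementation lives behind ssh_runner.go/sshutil rather than this direct x/crypto/ssh call:

	// writeRemote streams manifest bytes over SSH into a root-owned path,
	// in the spirit of the "scp memory --> /etc/kubernetes/addons/..." lines.
	package addons

	import (
		"bytes"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	func writeRemote(client *ssh.Client, data []byte, dst string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		// tee under sudo, since /etc/kubernetes/addons is root-owned
		return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
	}
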
	I0325 02:28:12.501452  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	[... 94 further node_ready.go:58 polls of node "default-k8s-different-port-20220325020956-262786" reporting "Ready":"False" (02:28:15 through 02:31:49, every ~2.5s), elided ...]
	I0325 02:31:52.001990  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:54.501597  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:57.001789  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:31:59.002549  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:01.003204  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:03.501518  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:05.502059  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:07.502226  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:10.001697  530227 node_ready.go:58] node "default-k8s-different-port-20220325020956-262786" has status "Ready":"False"
	I0325 02:32:10.504266  530227 node_ready.go:38] duration metric: took 4m0.009695209s waiting for node "default-k8s-different-port-20220325020956-262786" to be "Ready" ...
	I0325 02:32:10.507459  530227 out.go:176] 
	W0325 02:32:10.507629  530227 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0325 02:32:10.507649  530227 out.go:241] * 
	W0325 02:32:10.508449  530227 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
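
The GUEST_START error above is minikube's node-readiness wait running out of budget: the node_ready.go lines poll the node's Ready condition roughly every 2.5s until the timeout expires. As a rough illustration only — not minikube's actual node_ready.go — the same kind of wait can be sketched with client-go, assuming a kubeconfig at the default path and the node name taken from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        name := "default-k8s-different-port-20220325020956-262786"
        // Poll every 2.5s with a 6m budget, matching the cadence and timeout
        // visible in the node_ready.go lines above.
        err = wait.PollImmediate(2500*time.Millisecond, 6*time.Minute,
            func() (bool, error) {
                n, getErr := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
                if getErr != nil {
                    return false, nil // transient API error: keep polling
                }
                return nodeReady(n), nil
            })
        if err != nil {
            fmt.Printf("node %q never became Ready: %v\n", name, err)
        }
    }

In this run the condition never flips to True — the describe-nodes output below shows Ready=False with "cni plugin not initialized" — so the wait returns the timeout error shown above.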
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	86b0a34ff2b74       6de166512aa22       51 seconds ago      Running             kindnet-cni               4                   a65827bfd7c9e
	6b6b967023d97       6de166512aa22       4 minutes ago       Exited              kindnet-cni               3                   a65827bfd7c9e
	dd322dba64c8a       9b7cc99821098       13 minutes ago      Running             kube-proxy                0                   0ba6261aad033
	7be9a97449cc3       b07520cd7ab76       13 minutes ago      Running             kube-controller-manager   2                   a6aab84cb155b
	7e2801b636d95       f40be0088a83e       13 minutes ago      Running             kube-apiserver            2                   1df88ac29bb94
	1c45766f9b001       99a3486be4f28       13 minutes ago      Running             kube-scheduler            2                   779cc1c8f883d
	1918920313743       25f8c7f3da61c       13 minutes ago      Running             etcd                      2                   c959e762476d0
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-03-25 02:23:10 UTC, end at Fri 2022-03-25 02:41:14 UTC. --
	Mar 25 02:33:33 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:33:33.848608517Z" level=info msg="RemoveContainer for \"6832a4d07d8b063f82a511540f31f2ca94f55751848ab44261defdbea45d4910\" returns successfully"
	Mar 25 02:33:48 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:33:48.216814547Z" level=info msg="CreateContainer within sandbox \"a65827bfd7c9e3dcf303b05f51ed6166192a17b00a8bf8133576a04d3decd6da\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Mar 25 02:33:48 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:33:48.232729973Z" level=info msg="CreateContainer within sandbox \"a65827bfd7c9e3dcf303b05f51ed6166192a17b00a8bf8133576a04d3decd6da\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"bbb8ec8d1ebb636601d680d5fa7c19bf15c0483a5a5da72bcf7f9cdc07e26d8c\""
	Mar 25 02:33:48 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:33:48.233369642Z" level=info msg="StartContainer for \"bbb8ec8d1ebb636601d680d5fa7c19bf15c0483a5a5da72bcf7f9cdc07e26d8c\""
	Mar 25 02:33:48 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:33:48.304650727Z" level=info msg="StartContainer for \"bbb8ec8d1ebb636601d680d5fa7c19bf15c0483a5a5da72bcf7f9cdc07e26d8c\" returns successfully"
	Mar 25 02:36:28 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:36:28.621471814Z" level=info msg="shim disconnected" id=bbb8ec8d1ebb636601d680d5fa7c19bf15c0483a5a5da72bcf7f9cdc07e26d8c
	Mar 25 02:36:28 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:36:28.621556109Z" level=warning msg="cleaning up after shim disconnected" id=bbb8ec8d1ebb636601d680d5fa7c19bf15c0483a5a5da72bcf7f9cdc07e26d8c namespace=k8s.io
	Mar 25 02:36:28 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:36:28.621569974Z" level=info msg="cleaning up dead shim"
	Mar 25 02:36:28 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:36:28.631960494Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:36:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4252\n"
	Mar 25 02:36:29 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:36:29.141714674Z" level=info msg="RemoveContainer for \"202a86cfbfa5d0e2d8a73cb579305615cb704a24e591bc20e2b7ed4742958cef\""
	Mar 25 02:36:29 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:36:29.146108110Z" level=info msg="RemoveContainer for \"202a86cfbfa5d0e2d8a73cb579305615cb704a24e591bc20e2b7ed4742958cef\" returns successfully"
	Mar 25 02:36:58 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:36:58.216785213Z" level=info msg="CreateContainer within sandbox \"a65827bfd7c9e3dcf303b05f51ed6166192a17b00a8bf8133576a04d3decd6da\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Mar 25 02:36:58 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:36:58.230529566Z" level=info msg="CreateContainer within sandbox \"a65827bfd7c9e3dcf303b05f51ed6166192a17b00a8bf8133576a04d3decd6da\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"6b6b967023d97d141b1188d789b29f0024ebbde651ed50b965b928c2d85a3df8\""
	Mar 25 02:36:58 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:36:58.231062099Z" level=info msg="StartContainer for \"6b6b967023d97d141b1188d789b29f0024ebbde651ed50b965b928c2d85a3df8\""
	Mar 25 02:36:58 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:36:58.388718772Z" level=info msg="StartContainer for \"6b6b967023d97d141b1188d789b29f0024ebbde651ed50b965b928c2d85a3df8\" returns successfully"
	Mar 25 02:39:38 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:39:38.626968607Z" level=info msg="shim disconnected" id=6b6b967023d97d141b1188d789b29f0024ebbde651ed50b965b928c2d85a3df8
	Mar 25 02:39:38 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:39:38.627099470Z" level=warning msg="cleaning up after shim disconnected" id=6b6b967023d97d141b1188d789b29f0024ebbde651ed50b965b928c2d85a3df8 namespace=k8s.io
	Mar 25 02:39:38 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:39:38.627125587Z" level=info msg="cleaning up dead shim"
	Mar 25 02:39:38 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:39:38.636980038Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:39:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4357\n"
	Mar 25 02:39:39 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:39:39.472695542Z" level=info msg="RemoveContainer for \"bbb8ec8d1ebb636601d680d5fa7c19bf15c0483a5a5da72bcf7f9cdc07e26d8c\""
	Mar 25 02:39:39 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:39:39.477390074Z" level=info msg="RemoveContainer for \"bbb8ec8d1ebb636601d680d5fa7c19bf15c0483a5a5da72bcf7f9cdc07e26d8c\" returns successfully"
	Mar 25 02:40:22 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:40:22.215765875Z" level=info msg="CreateContainer within sandbox \"a65827bfd7c9e3dcf303b05f51ed6166192a17b00a8bf8133576a04d3decd6da\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Mar 25 02:40:22 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:40:22.229181689Z" level=info msg="CreateContainer within sandbox \"a65827bfd7c9e3dcf303b05f51ed6166192a17b00a8bf8133576a04d3decd6da\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"86b0a34ff2b74c22de2cf854786ae14484f32883f1acff3f53e3785f8ae243f8\""
	Mar 25 02:40:22 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:40:22.229660112Z" level=info msg="StartContainer for \"86b0a34ff2b74c22de2cf854786ae14484f32883f1acff3f53e3785f8ae243f8\""
	Mar 25 02:40:22 default-k8s-different-port-20220325020956-262786 containerd[345]: time="2022-03-25T02:40:22.388667424Z" level=info msg="StartContainer for \"86b0a34ff2b74c22de2cf854786ae14484f32883f1acff3f53e3785f8ae243f8\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220325020956-262786
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220325020956-262786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95
	                    minikube.k8s.io/name=default-k8s-different-port-20220325020956-262786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_25T02_27_58_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Mar 2022 02:27:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220325020956-262786
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 25 Mar 2022 02:41:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Mar 2022 02:38:25 +0000   Fri, 25 Mar 2022 02:27:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Mar 2022 02:38:25 +0000   Fri, 25 Mar 2022 02:27:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Mar 2022 02:38:25 +0000   Fri, 25 Mar 2022 02:27:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 25 Mar 2022 02:38:25 +0000   Fri, 25 Mar 2022 02:27:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20220325020956-262786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                3d34c106-4e48-46f4-9bcf-ea4602321294
	  Boot ID:                    63fce5d9-a30b-498a-bfed-7dd46d23a363
	  Kernel Version:             5.13.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.3
	  Kube-Proxy Version:         v1.23.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220325020956-262786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-dgbq7                                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220325020956-262786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220325020956-262786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-rfd9g                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220325020956-262786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 13m   kube-proxy  
	  Normal  Starting                 13m   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet     Node default-k8s-different-port-20220325020956-262786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m   kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +1.011896] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.023877] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +0.953086] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethf4b51852
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 70 36 f8 5f b0 08 06
	[  +0.031950] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth6535462d
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a c6 0e 0e 23 49 08 06
	[  +0.644934] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth6535462d
	[  +0.401878] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth5b52bbbf
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 37 c2 ed 50 67 08 06
	[  +0.935995] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.035860] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[  +1.019942] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 da 38 61 1e 1f 08 06
	[Mar25 02:14] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth3401b1e0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 76 e9 ba cf fb f8 08 06
	[  +0.179199] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vetha9eb2fdf
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 32 f8 c1 5c 31 f0 08 06
	[  +0.564272] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethc1de7e82
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e d1 71 83 67 99 08 06
	[  +0.295714] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth71c4bd69
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe 35 ee 14 12 82 08 06
	
	* 
	* ==> etcd [1918920313743155ab9ccfc3912db66df70045dd4f71848f6def6ce3db51955e] <==
	* {"level":"info","ts":"2022-03-25T02:27:51.687Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-03-25T02:27:51.687Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-25T02:27:51.687Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-25T02:27:51.687Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-03-25T02:27:51.687Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-03-25T02:27:52.407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-03-25T02:27:52.407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-03-25T02:27:52.407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-03-25T02:27:52.407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-03-25T02:27:52.407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-25T02:27:52.407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-03-25T02:27:52.407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:default-k8s-different-port-20220325020956-262786 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:27:52.408Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-25T02:27:52.409Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-03-25T02:27:52.409Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-03-25T02:37:52.424Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":643}
	{"level":"info","ts":"2022-03-25T02:37:52.425Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":643,"took":"610.306µs"}
	
	* 
	* ==> kernel <==
	*  02:41:14 up  5:19,  0 users,  load average: 0.02, 0.12, 0.40
	Linux default-k8s-different-port-20220325020956-262786 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [7e2801b636d950a93f789ebef4b0df032b145af6b25620f294aeeeb2684410b5] <==
	* I0325 02:31:12.802694       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0325 02:32:55.292853       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:32:55.292945       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:32:55.292956       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0325 02:33:55.293365       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:33:55.293440       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:33:55.293449       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0325 02:35:55.294510       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:35:55.294600       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:35:55.294622       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0325 02:37:55.300766       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:37:55.300850       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:37:55.300867       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0325 02:38:55.301633       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:38:55.301708       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:38:55.301718       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0325 02:40:55.302729       1 handler_proxy.go:104] no RequestInfo found in the context
	E0325 02:40:55.302822       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0325 02:40:55.302830       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [7be9a97449cc36584579a06b845d0efa759c0b852c6a7e736172c2131a3e29ce] <==
	* W0325 02:35:10.346005       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:35:39.916713       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:35:40.362227       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:36:09.930238       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:36:10.377279       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:36:39.938558       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:36:40.392897       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:37:09.949997       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:37:10.408610       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:37:39.961709       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:37:40.421304       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:38:09.972638       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:38:10.437763       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:38:39.984028       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:38:40.451906       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:39:09.994352       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:39:10.467847       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:39:40.003938       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:39:40.482080       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:40:10.026392       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:40:10.498900       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:40:40.049273       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:40:40.513403       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0325 02:41:10.062167       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0325 02:41:10.529295       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [dd322dba64c8aa7660220ec9853be1f48708b62522e445198ad60efe62d029ff] <==
	* I0325 02:28:11.496212       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0325 02:28:11.496289       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0325 02:28:11.496332       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0325 02:28:11.686587       1 server_others.go:206] "Using iptables Proxier"
	I0325 02:28:11.686649       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0325 02:28:11.686659       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0325 02:28:11.686690       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0325 02:28:11.687288       1 server.go:656] "Version info" version="v1.23.3"
	I0325 02:28:11.687967       1 config.go:317] "Starting service config controller"
	I0325 02:28:11.688037       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0325 02:28:11.688010       1 config.go:226] "Starting endpoint slice config controller"
	I0325 02:28:11.688180       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0325 02:28:11.788226       1 shared_informer.go:247] Caches are synced for service config 
	I0325 02:28:11.788263       1 shared_informer.go:247] Caches are synced for endpoint slice config 
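
The "Waiting for caches to sync" / "Caches are synced" pair above is client-go's standard shared-informer startup handshake: start the informers, then block until their local caches reflect a full initial list before serving. A minimal sketch of that pattern (illustrative only, not kube-proxy's code; assumes the default kubeconfig path):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
        svcInformer := factory.Core().V1().Services().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)

        // Equivalent of the "Waiting for caches to sync" log line above.
        if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
            fmt.Println("timed out waiting for caches to sync")
            return
        }
        fmt.Println("caches are synced")
    }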
	
	* 
	* ==> kube-scheduler [1c45766f9b001a935c9e0207523f5e109bc199d1764e28b1815a1ffda87f035c] <==
	* W0325 02:27:54.394360       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0325 02:27:54.394430       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 02:27:54.394445       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0325 02:27:54.394431       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0325 02:27:54.394235       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0325 02:27:54.394542       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0325 02:27:54.394596       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0325 02:27:54.394627       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0325 02:27:54.394685       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0325 02:27:54.394717       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0325 02:27:54.394829       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0325 02:27:54.394898       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0325 02:27:54.397264       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0325 02:27:54.397299       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0325 02:27:55.258388       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0325 02:27:55.258426       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0325 02:27:55.258881       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0325 02:27:55.258928       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0325 02:27:55.289679       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0325 02:27:55.289719       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0325 02:27:55.455646       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0325 02:27:55.455682       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0325 02:27:55.474784       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0325 02:27:55.474827       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0325 02:27:55.790840       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-03-25 02:23:10 UTC, end at Fri 2022-03-25 02:41:14 UTC. --
	Mar 25 02:39:39 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:39:39.472236    2893 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-dgbq7_kube-system(24f771e6-9a01-4bd1-9da1-2846eb0f9852)\"" pod="kube-system/kindnet-dgbq7" podUID=24f771e6-9a01-4bd1-9da1-2846eb0f9852
	Mar 25 02:39:42 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:39:42.591565    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:39:47 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:39:47.592731    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:39:52 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:39:52.594132    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:39:54 default-k8s-different-port-20220325020956-262786 kubelet[2893]: I0325 02:39:54.213615    2893 scope.go:110] "RemoveContainer" containerID="6b6b967023d97d141b1188d789b29f0024ebbde651ed50b965b928c2d85a3df8"
	Mar 25 02:39:54 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:39:54.214053    2893 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-dgbq7_kube-system(24f771e6-9a01-4bd1-9da1-2846eb0f9852)\"" pod="kube-system/kindnet-dgbq7" podUID=24f771e6-9a01-4bd1-9da1-2846eb0f9852
	Mar 25 02:39:57 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:39:57.594880    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:40:02 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:40:02.596137    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:40:07 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:40:07.597471    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:40:08 default-k8s-different-port-20220325020956-262786 kubelet[2893]: I0325 02:40:08.214026    2893 scope.go:110] "RemoveContainer" containerID="6b6b967023d97d141b1188d789b29f0024ebbde651ed50b965b928c2d85a3df8"
	Mar 25 02:40:08 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:40:08.214314    2893 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-dgbq7_kube-system(24f771e6-9a01-4bd1-9da1-2846eb0f9852)\"" pod="kube-system/kindnet-dgbq7" podUID=24f771e6-9a01-4bd1-9da1-2846eb0f9852
	Mar 25 02:40:12 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:40:12.599003    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:40:17 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:40:17.600691    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:40:22 default-k8s-different-port-20220325020956-262786 kubelet[2893]: I0325 02:40:22.213348    2893 scope.go:110] "RemoveContainer" containerID="6b6b967023d97d141b1188d789b29f0024ebbde651ed50b965b928c2d85a3df8"
	Mar 25 02:40:22 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:40:22.602132    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:40:27 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:40:27.603511    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:40:32 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:40:32.604830    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:40:37 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:40:37.606378    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:40:42 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:40:42.607620    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:40:47 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:40:47.608741    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:40:52 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:40:52.609834    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:40:57 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:40:57.611317    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:41:02 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:41:02.612721    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:41:07 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:41:07.614179    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 25 02:41:12 default-k8s-different-port-20220325020956-262786 kubelet[2893]: E0325 02:41:12.615643    2893 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
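
The kindnet-cni restarts above follow kubelet's crash-loop back-off: the restart delay starts at 10s, doubles after each failure, and caps at 5m (resetting after 10 minutes of clean running), which is why the log reports "back-off 40s" after the third failure while the container status table shows attempt 4. A toy model of that schedule — not kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Toy model of kubelet's CrashLoopBackOff schedule: 10s base delay,
        // doubled after each failed restart, capped at 5 minutes.
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for restart := 1; restart <= 6; restart++ {
            fmt.Printf("restart %d: back-off %v\n", restart, delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
    }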
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220325020956-262786 -n default-k8s-different-port-20220325020956-262786
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20220325020956-262786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-64897985d-hbhkk metrics-server-b955d9d8-689fn storage-provisioner dashboard-metrics-scraper-56974995fc-hm5lc kubernetes-dashboard-ccd587f44-ghrlb
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220325020956-262786 describe pod coredns-64897985d-hbhkk metrics-server-b955d9d8-689fn storage-provisioner dashboard-metrics-scraper-56974995fc-hm5lc kubernetes-dashboard-ccd587f44-ghrlb
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220325020956-262786 describe pod coredns-64897985d-hbhkk metrics-server-b955d9d8-689fn storage-provisioner dashboard-metrics-scraper-56974995fc-hm5lc kubernetes-dashboard-ccd587f44-ghrlb: exit status 1 (68.600849ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-hbhkk" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-689fn" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-hm5lc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-ccd587f44-ghrlb" not found

** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20220325020956-262786 describe pod coredns-64897985d-hbhkk metrics-server-b955d9d8-689fn storage-provisioner dashboard-metrics-scraper-56974995fc-hm5lc kubernetes-dashboard-ccd587f44-ghrlb: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (542.49s)
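
The post-mortem above gathers non-running pods with a server-side field selector (status.phase!=Running, across all namespaces). A client-go equivalent, sketched under the same default-kubeconfig assumption:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // "" namespace == all namespaces; the selector is evaluated server-side,
        // mirroring: kubectl get po -A --field-selector=status.phase!=Running
        pods, err := cs.CoreV1().Pods("").List(context.TODO(),
            metav1.ListOptions{FieldSelector: "status.phase!=Running"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
        }
    }

Note that the describe call in the post-mortem runs without a namespace flag, so kubectl searches the context's default namespace; the NotFound errors for these kube-system and dashboard pods follow from that, not from the pods' state.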


Test pass (225/267)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 5.05
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.23.3/json-events 5.22
11 TestDownloadOnly/v1.23.3/preload-exists 0
15 TestDownloadOnly/v1.23.3/LogsDuration 0.07
17 TestDownloadOnly/v1.23.4-rc.0/json-events 6.16
18 TestDownloadOnly/v1.23.4-rc.0/preload-exists 0
22 TestDownloadOnly/v1.23.4-rc.0/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.31
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.2
25 TestDownloadOnlyKic 7.87
26 TestBinaryMirror 0.84
27 TestOffline 96.31
29 TestAddons/Setup 158.59
31 TestAddons/parallel/Registry 16.21
32 TestAddons/parallel/Ingress 21.63
33 TestAddons/parallel/MetricsServer 5.53
34 TestAddons/parallel/HelmTiller 12.11
36 TestAddons/parallel/CSI 46.31
38 TestAddons/serial/GCPAuth 38.3
39 TestAddons/StoppedEnableDisable 20.31
40 TestCertOptions 53.18
41 TestCertExpiration 243.42
43 TestForceSystemdFlag 63.25
44 TestForceSystemdEnv 72.35
45 TestKVMDriverInstallOrUpdate 1.93
49 TestErrorSpam/setup 41.29
50 TestErrorSpam/start 0.89
51 TestErrorSpam/status 1.1
52 TestErrorSpam/pause 2.13
53 TestErrorSpam/unpause 1.53
54 TestErrorSpam/stop 14.99
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 58.63
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 15.41
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 0.18
65 TestFunctional/serial/CacheCmd/cache/add_remote 3.27
66 TestFunctional/serial/CacheCmd/cache/add_local 0.98
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
68 TestFunctional/serial/CacheCmd/cache/list 0.06
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
70 TestFunctional/serial/CacheCmd/cache/cache_reload 2.12
71 TestFunctional/serial/CacheCmd/cache/delete 0.12
72 TestFunctional/serial/MinikubeKubectlCmd 0.11
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
74 TestFunctional/serial/ExtraConfig 41.58
75 TestFunctional/serial/ComponentHealth 0.06
76 TestFunctional/serial/LogsCmd 1.06
77 TestFunctional/serial/LogsFileCmd 1.08
79 TestFunctional/parallel/ConfigCmd 0.49
80 TestFunctional/parallel/DashboardCmd 2.47
81 TestFunctional/parallel/DryRun 0.58
82 TestFunctional/parallel/InternationalLanguage 0.21
83 TestFunctional/parallel/StatusCmd 1.26
86 TestFunctional/parallel/ServiceCmd 11.44
87 TestFunctional/parallel/ServiceCmdConnect 10.75
88 TestFunctional/parallel/AddonsCmd 0.17
89 TestFunctional/parallel/PersistentVolumeClaim 26.99
91 TestFunctional/parallel/SSHCmd 0.84
92 TestFunctional/parallel/CpCmd 1.31
93 TestFunctional/parallel/MySQL 20.69
94 TestFunctional/parallel/FileSync 0.42
95 TestFunctional/parallel/CertSync 2.5
99 TestFunctional/parallel/NodeLabels 0.06
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.83
103 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
104 TestFunctional/parallel/Version/short 0.08
105 TestFunctional/parallel/Version/components 1.31
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
109 TestFunctional/parallel/ImageCommands/ImageBuild 2.52
110 TestFunctional/parallel/ImageCommands/Setup 0.98
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
114 TestFunctional/parallel/ProfileCmd/profile_not_create 0.73
115 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.17
116 TestFunctional/parallel/ProfileCmd/profile_list 0.51
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.21
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 6.74
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.24
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.98
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.34
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.15
134 TestFunctional/parallel/MountCmd/any-port 6.61
135 TestFunctional/parallel/MountCmd/specific-port 2.2
136 TestFunctional/delete_addon-resizer_images 0.1
137 TestFunctional/delete_my-image_image 0.03
138 TestFunctional/delete_minikube_cached_images 0.03
141 TestIngressAddonLegacy/StartLegacyK8sCluster 115.25
143 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 9.14
144 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.37
145 TestIngressAddonLegacy/serial/ValidateIngressAddons 33.54
148 TestJSONOutput/start/Command 67.56
149 TestJSONOutput/start/Audit 0
151 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/pause/Command 0.7
155 TestJSONOutput/pause/Audit 0
157 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/unpause/Command 0.62
161 TestJSONOutput/unpause/Audit 0
163 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/stop/Command 15.71
167 TestJSONOutput/stop/Audit 0
169 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
171 TestErrorJSONOutput 0.28
173 TestKicCustomNetwork/create_custom_network 35.35
174 TestKicCustomNetwork/use_default_bridge_network 28.5
175 TestKicExistingNetwork 28.48
176 TestMainNoArgs 0.06
179 TestMountStart/serial/StartWithMountFirst 4.7
180 TestMountStart/serial/VerifyMountFirst 0.33
181 TestMountStart/serial/StartWithMountSecond 4.62
182 TestMountStart/serial/VerifyMountSecond 0.33
183 TestMountStart/serial/DeleteFirst 1.86
184 TestMountStart/serial/VerifyMountPostDelete 0.33
185 TestMountStart/serial/Stop 1.26
186 TestMountStart/serial/RestartStopped 6.38
187 TestMountStart/serial/VerifyMountPostStop 0.32
190 TestMultiNode/serial/FreshStart2Nodes 103.98
191 TestMultiNode/serial/DeployApp2Nodes 3.49
192 TestMultiNode/serial/PingHostFrom2Pods 0.8
193 TestMultiNode/serial/AddNode 43.11
194 TestMultiNode/serial/ProfileList 0.36
195 TestMultiNode/serial/CopyFile 11.83
196 TestMultiNode/serial/StopNode 6.93
197 TestMultiNode/serial/StartAfterStop 36.11
198 TestMultiNode/serial/RestartKeepsNodes 190.91
199 TestMultiNode/serial/DeleteNode 9.82
200 TestMultiNode/serial/StopMultiNode 40.35
201 TestMultiNode/serial/RestartMultiNode 117.54
202 TestMultiNode/serial/ValidateNameConflict 44.67
207 TestPreload 114.24
209 TestScheduledStopUnix 118.55
212 TestInsufficientStorage 18.63
213 TestRunningBinaryUpgrade 301.02
215 TestKubernetesUpgrade 186.3
216 TestMissingContainerUpgrade 111.3
218 TestStoppedBinaryUpgrade/Setup 0.5
219 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
220 TestNoKubernetes/serial/StartWithK8s 67.31
221 TestStoppedBinaryUpgrade/Upgrade 115.73
222 TestNoKubernetes/serial/StartWithStopK8s 24.77
223 TestNoKubernetes/serial/Start 4.77
224 TestNoKubernetes/serial/VerifyK8sNotRunning 0.46
225 TestNoKubernetes/serial/ProfileList 1.42
226 TestNoKubernetes/serial/Stop 6.24
227 TestNoKubernetes/serial/StartNoArgs 5.65
228 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
229 TestStoppedBinaryUpgrade/MinikubeLogs 0.95
237 TestNetworkPlugins/group/false 0.73
249 TestPause/serial/Start 70.67
250 TestPause/serial/SecondStartNoReconfiguration 15.66
251 TestPause/serial/Pause 0.73
252 TestPause/serial/VerifyStatus 0.39
253 TestPause/serial/Unpause 0.7
254 TestPause/serial/PauseAgain 5.42
255 TestNetworkPlugins/group/auto/Start 59.87
256 TestPause/serial/DeletePaused 9.84
257 TestPause/serial/VerifyDeletedResources 0.89
260 TestNetworkPlugins/group/cilium/Start 90.99
261 TestNetworkPlugins/group/auto/KubeletFlags 0.71
262 TestNetworkPlugins/group/auto/NetCatPod 10.25
263 TestNetworkPlugins/group/auto/DNS 0.14
264 TestNetworkPlugins/group/auto/Localhost 0.15
265 TestNetworkPlugins/group/auto/HairPin 0.14
268 TestNetworkPlugins/group/cilium/ControllerPod 5.02
269 TestNetworkPlugins/group/cilium/KubeletFlags 0.36
270 TestNetworkPlugins/group/cilium/NetCatPod 11
271 TestNetworkPlugins/group/cilium/DNS 0.13
272 TestNetworkPlugins/group/cilium/Localhost 0.14
273 TestNetworkPlugins/group/cilium/HairPin 0.14
274 TestNetworkPlugins/group/kindnet/Start 71.38
275 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
276 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
277 TestNetworkPlugins/group/kindnet/NetCatPod 8.22
280 TestNetworkPlugins/group/enable-default-cni/Start 59.52
281 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
282 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.18
284 TestNetworkPlugins/group/bridge/Start 57.55
287 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
288 TestNetworkPlugins/group/bridge/NetCatPod 9.33
291 TestStartStop/group/embed-certs/serial/FirstStart 59.23
293 TestStartStop/group/embed-certs/serial/DeployApp 9.27
294 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.7
295 TestStartStop/group/embed-certs/serial/Stop 20.2
296 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
297 TestStartStop/group/embed-certs/serial/SecondStart 324.74
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.57
299 TestStartStop/group/old-k8s-version/serial/Stop 5.91
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
304 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
305 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
306 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.39
307 TestStartStop/group/embed-certs/serial/Pause 3.14
310 TestStartStop/group/newest-cni/serial/FirstStart 54.92
311 TestStartStop/group/newest-cni/serial/DeployApp 0
312 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.65
313 TestStartStop/group/newest-cni/serial/Stop 20.18
314 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
315 TestStartStop/group/newest-cni/serial/SecondStart 34.72
316 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.62
317 TestStartStop/group/no-preload/serial/Stop 10.06
318 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
320 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
321 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
323 TestStartStop/group/newest-cni/serial/Pause 2.93
325 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.62
326 TestStartStop/group/default-k8s-different-port/serial/Stop 9.48
327 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.21

TestDownloadOnly/v1.16.0/json-events (5.05s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220325011755-262786 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220325011755-262786 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.048641348s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (5.05s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220325011755-262786
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220325011755-262786: exit status 85 (73.460688ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 01:17:55
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220325011755-262786"
-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

TestDownloadOnly/v1.23.3/json-events (5.22s)

=== RUN   TestDownloadOnly/v1.23.3/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220325011755-262786 --force --alsologtostderr --kubernetes-version=v1.23.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220325011755-262786 --force --alsologtostderr --kubernetes-version=v1.23.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.21950864s)
--- PASS: TestDownloadOnly/v1.23.3/json-events (5.22s)

TestDownloadOnly/v1.23.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.3/preload-exists
--- PASS: TestDownloadOnly/v1.23.3/preload-exists (0.00s)

TestDownloadOnly/v1.23.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.23.3/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220325011755-262786
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220325011755-262786: exit status 85 (74.115028ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 01:18:00
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0325 01:18:00.913328  262945 out.go:297] Setting OutFile to fd 1 ...
	I0325 01:18:00.913429  262945 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:18:00.913433  262945 out.go:310] Setting ErrFile to fd 2...
	I0325 01:18:00.913436  262945 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:18:00.913527  262945 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	W0325 01:18:00.913641  262945 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/config/config.json: no such file or directory
	I0325 01:18:00.913757  262945 out.go:304] Setting JSON to true
	I0325 01:18:00.914548  262945 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14153,"bootTime":1648156928,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 01:18:00.914619  262945 start.go:125] virtualization: kvm guest
	I0325 01:18:00.917374  262945 notify.go:193] Checking for updates...
	I0325 01:18:00.919365  262945 config.go:176] Loaded profile config "download-only-20220325011755-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0325 01:18:00.919459  262945 start.go:709] api.Load failed for download-only-20220325011755-262786: filestore "download-only-20220325011755-262786": Docker machine "download-only-20220325011755-262786" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0325 01:18:00.919507  262945 driver.go:346] Setting default libvirt URI to qemu:///system
	W0325 01:18:00.919531  262945 start.go:709] api.Load failed for download-only-20220325011755-262786: filestore "download-only-20220325011755-262786": Docker machine "download-only-20220325011755-262786" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0325 01:18:00.955745  262945 docker.go:136] docker version: linux-20.10.14
	I0325 01:18:00.955816  262945 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 01:18:01.044073  262945 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-03-25 01:18:00.980976114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 01:18:01.044175  262945 docker.go:253] overlay module found
	I0325 01:18:01.046392  262945 start.go:284] selected driver: docker
	I0325 01:18:01.046408  262945 start.go:801] validating driver "docker" against &{Name:download-only-20220325011755-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220325011755-262786 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMe
trics:false}
	I0325 01:18:01.046640  262945 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 01:18:01.143616  262945 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-03-25 01:18:01.072517018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 01:18:01.144129  262945 cni.go:93] Creating CNI manager for ""
	I0325 01:18:01.144145  262945 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 01:18:01.144157  262945 start_flags.go:304] config:
	{Name:download-only-20220325011755-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:download-only-20220325011755-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 01:18:01.146315  262945 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 01:18:01.147895  262945 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 01:18:01.147940  262945 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 01:18:01.177684  262945 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 01:18:01.177707  262945 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 01:18:01.185408  262945 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.3/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
	I0325 01:18:01.185443  262945 cache.go:57] Caching tarball of preloaded images
	I0325 01:18:01.185752  262945 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 01:18:01.188039  262945 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 ...
	I0325 01:18:01.238018  262945 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.3/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:f76421332c21b49eda3ceac1a979d1f9 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
	I0325 01:18:04.418914  262945 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 ...
	I0325 01:18:04.419028  262945 preload.go:256] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 ...
	I0325 01:18:05.515741  262945 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on containerd
	I0325 01:18:05.515889  262945 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/download-only-20220325011755-262786/config.json ...
	I0325 01:18:05.516106  262945 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
	I0325 01:18:05.516358  262945 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.3/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/linux/amd64/v1.23.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220325011755-262786"
-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.3/LogsDuration (0.07s)
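The Last Start log above shows where minikube pins an md5 digest in the preload download URL. As a rough illustration, the same tarball can be fetched and verified by hand; the URL and digest are copied verbatim from the log, and the filename is simply what curl writes locally:

	# Fetch the v1.23.3 preload tarball referenced in the log
	curl -LO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.3/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
	# Verify it against the md5 digest minikube embedded in the download URL
	echo "f76421332c21b49eda3ceac1a979d1f9  preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4" | md5sum -c -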

TestDownloadOnly/v1.23.4-rc.0/json-events (6.16s)

=== RUN   TestDownloadOnly/v1.23.4-rc.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220325011755-262786 --force --alsologtostderr --kubernetes-version=v1.23.4-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220325011755-262786 --force --alsologtostderr --kubernetes-version=v1.23.4-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.157688076s)
--- PASS: TestDownloadOnly/v1.23.4-rc.0/json-events (6.16s)

TestDownloadOnly/v1.23.4-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.4-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.23.4-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.23.4-rc.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.23.4-rc.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220325011755-262786
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220325011755-262786: exit status 85 (74.366814ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/25 01:18:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0325 01:18:06.208437  263089 out.go:297] Setting OutFile to fd 1 ...
	I0325 01:18:06.208542  263089 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:18:06.208553  263089 out.go:310] Setting ErrFile to fd 2...
	I0325 01:18:06.208558  263089 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:18:06.208660  263089 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	W0325 01:18:06.208774  263089 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/config/config.json: no such file or directory
	I0325 01:18:06.208895  263089 out.go:304] Setting JSON to true
	I0325 01:18:06.209732  263089 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14158,"bootTime":1648156928,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 01:18:06.209797  263089 start.go:125] virtualization: kvm guest
	I0325 01:18:06.212470  263089 notify.go:193] Checking for updates...
	I0325 01:18:06.214725  263089 config.go:176] Loaded profile config "download-only-20220325011755-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	W0325 01:18:06.214782  263089 start.go:709] api.Load failed for download-only-20220325011755-262786: filestore "download-only-20220325011755-262786": Docker machine "download-only-20220325011755-262786" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0325 01:18:06.214830  263089 driver.go:346] Setting default libvirt URI to qemu:///system
	W0325 01:18:06.214859  263089 start.go:709] api.Load failed for download-only-20220325011755-262786: filestore "download-only-20220325011755-262786": Docker machine "download-only-20220325011755-262786" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0325 01:18:06.251882  263089 docker.go:136] docker version: linux-20.10.14
	I0325 01:18:06.251975  263089 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 01:18:06.337077  263089 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-03-25 01:18:06.276557424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 01:18:06.337207  263089 docker.go:253] overlay module found
	I0325 01:18:06.339842  263089 start.go:284] selected driver: docker
	I0325 01:18:06.339868  263089 start.go:801] validating driver "docker" against &{Name:download-only-20220325011755-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:download-only-20220325011755-262786 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMe
trics:false}
	I0325 01:18:06.340184  263089 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 01:18:06.422843  263089 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-03-25 01:18:06.366659339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 01:18:06.423474  263089 cni.go:93] Creating CNI manager for ""
	I0325 01:18:06.423490  263089 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0325 01:18:06.423507  263089 start_flags.go:304] config:
	{Name:download-only-20220325011755-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:download-only-20220325011755-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 01:18:06.425994  263089 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0325 01:18:06.427676  263089 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime containerd
	I0325 01:18:06.427779  263089 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0325 01:18:06.457043  263089 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0325 01:18:06.457067  263089 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0325 01:18:06.471840  263089 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.4-rc.0/preloaded-images-k8s-v17-v1.23.4-rc.0-containerd-overlay2-amd64.tar.lz4
	I0325 01:18:06.471857  263089 cache.go:57] Caching tarball of preloaded images
	I0325 01:18:06.472193  263089 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime containerd
	I0325 01:18:06.474415  263089 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.4-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	I0325 01:18:06.505713  263089 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.4-rc.0/preloaded-images-k8s-v17-v1.23.4-rc.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:fe92d3694b9dfc0a6db9d069bd338fe1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-rc.0-containerd-overlay2-amd64.tar.lz4
	I0325 01:18:10.694339  263089 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.23.4-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	I0325 01:18:10.694437  263089 preload.go:256] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220325011755-262786"
-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.4-rc.0/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.31s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:193: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.31s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.2s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:205: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220325011755-262786
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.20s)

TestDownloadOnlyKic (7.87s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220325011813-262786 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:230: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220325011813-262786 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (6.553108593s)
helpers_test.go:176: Cleaning up "download-docker-20220325011813-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220325011813-262786
--- PASS: TestDownloadOnlyKic (7.87s)

TestBinaryMirror (0.84s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:316: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220325011820-262786 --alsologtostderr --binary-mirror http://127.0.0.1:34277 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-20220325011820-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220325011820-262786
--- PASS: TestBinaryMirror (0.84s)
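TestBinaryMirror points minikube at http://127.0.0.1:34277 instead of storage.googleapis.com for the kubelet/kubeadm/kubectl downloads. A minimal sketch of standing up such a mirror by hand, assuming the binaries and their .sha256 files have already been staged under a release-style directory tree (the directory layout and demo profile name here are assumptions, not from the test):

	# Serve a pre-populated ./mirror directory, e.g. mirror/v1.23.3/bin/linux/amd64/kubectl
	cd mirror && python3 -m http.server 34277 &
	# Start minikube against the local mirror, as the test does
	out/minikube-linux-amd64 start -p binary-mirror-demo --binary-mirror http://127.0.0.1:34277 --driver=docker --container-runtime=containerd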

TestOffline (96.31s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20220325014714-262786 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20220325014714-262786 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m33.086094975s)
helpers_test.go:176: Cleaning up "offline-containerd-20220325014714-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20220325014714-262786
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20220325014714-262786: (3.224314705s)
--- PASS: TestOffline (96.31s)
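TestOffline only passes when everything the start needs is already on disk, which is also the manual recipe for air-gapped use: warm the cache while online, then start without the network. A small sketch, reusing the pause image name that the CacheCmd tests in the table above exercise:

	# While online, pull an image into minikube's local cache
	out/minikube-linux-amd64 cache add k8s.gcr.io/pause:3.3
	# Confirm what has been cached for later offline starts
	out/minikube-linux-amd64 cache list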

TestAddons/Setup (158.59s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220325011821-262786 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220325011821-262786 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m38.591289929s)
--- PASS: TestAddons/Setup (158.59s)
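The setup run enables every addon up front with repeated --addons flags; on a profile that is already running, the same addons can be toggled individually instead. A minimal sketch against the profile created above:

	# Enable one addon at a time on the running cluster
	out/minikube-linux-amd64 -p addons-20220325011821-262786 addons enable metrics-server
	# Show the enabled/disabled state of all addons for this profile
	out/minikube-linux-amd64 -p addons-20220325011821-262786 addons list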

TestAddons/parallel/Registry (16.21s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:281: registry stabilized in 10.471054ms
=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-r4dph" [733043f4-5417-478c-94d5-d8a5701f0950] Running
=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009660841s
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-8gck9" [98b311d1-0ac3-4727-b73d-aed3a97da2e6] Running
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006799376s
addons_test.go:291: (dbg) Run:  kubectl --context addons-20220325011821-262786 delete po -l run=registry-test --now
addons_test.go:296: (dbg) Run:  kubectl --context addons-20220325011821-262786 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
=== CONT  TestAddons/parallel/Registry
addons_test.go:296: (dbg) Done: kubectl --context addons-20220325011821-262786 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.480205741s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220325011821-262786 ip
2022/03/25 01:21:16 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:339: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220325011821-262786 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.21s)
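The wget --spider probe only proves the registry answers from inside the cluster. As an illustrative follow-on, the same registry is reachable from the host at the 192.168.49.2:5000 endpoint seen in the DEBUG line above (the busybox tag is hypothetical, and the docker daemon must trust this address as an insecure registry):

	# Tag and push a local image into the addon registry
	docker tag gcr.io/k8s-minikube/busybox 192.168.49.2:5000/busybox
	docker push 192.168.49.2:5000/busybox
	# List repositories over the standard registry v2 API
	curl http://192.168.49.2:5000/v2/_catalog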

TestAddons/parallel/Ingress (21.63s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Run:  kubectl --context addons-20220325011821-262786 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Run:  kubectl --context addons-20220325011821-262786 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:183: (dbg) Done: kubectl --context addons-20220325011821-262786 replace --force -f testdata/nginx-ingress-v1.yaml: (1.729960553s)
addons_test.go:196: (dbg) Run:  kubectl --context addons-20220325011821-262786 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [26d2127f-37a6-44da-8e17-6a182c01802a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [26d2127f-37a6-44da-8e17-6a182c01802a] Running
=== CONT  TestAddons/parallel/Ingress
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.200496469s
addons_test.go:213: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220325011821-262786 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run:  kubectl --context addons-20220325011821-262786 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220325011821-262786 ip
addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220325011821-262786 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220325011821-262786 addons disable ingress --alsologtostderr -v=1
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p addons-20220325011821-262786 addons disable ingress --alsologtostderr -v=1: (7.56409867s)
--- PASS: TestAddons/parallel/Ingress (21.63s)
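
The two probes above are the whole ingress contract: HTTP routing by Host header through the controller, and name resolution through ingress-dns using the node as a DNS server. Reproduced by hand (`<profile>` is a placeholder; the hostnames come from the testdata manifests referenced in the log):

    # Route to the nginx backend purely by Host header.
    minikube -p <profile> ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Resolve a test record against the minikube node's ingress-dns.
    nslookup hello-john.test "$(minikube -p <profile> ip)"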

TestAddons/parallel/MetricsServer (5.53s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:358: metrics-server stabilized in 10.494431ms
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:343: "metrics-server-bd6f4dd56-t6z99" [ae4fb88c-dcf0-4fc8-8eeb-b60b32bb9104] Running
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00961382s
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220325011821-262786 top pods -n kube-system
addons_test.go:383: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220325011821-262786 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.53s)
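
Once the metrics-server pod reports Ready, the functional assertion is simply that the Metrics API answers; the same spot check by hand:

    # Fails until metrics-server is actually serving the Metrics API.
    kubectl top pods -n kube-system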

TestAddons/parallel/HelmTiller (12.11s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:407: tiller-deploy stabilized in 10.310921ms
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:343: "tiller-deploy-6d67d5465d-strc7" [314de3e2-6888-44a0-90c9-fe2debf62794] Running
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009822095s
addons_test.go:424: (dbg) Run:  kubectl --context addons-20220325011821-262786 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:424: (dbg) Done: kubectl --context addons-20220325011821-262786 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.504611056s)
addons_test.go:429: kubectl --context addons-20220325011821-262786 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:424: (dbg) Run:  kubectl --context addons-20220325011821-262786 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:424: (dbg) Done: kubectl --context addons-20220325011821-262786 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (1.333025381s)
addons_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220325011821-262786 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.11s)
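
The "Unable to use a TTY" stderr above is tolerated noise: the test passes -t without a real terminal attached. A by-hand version of the same probe that keeps stdin but drops the TTY to avoid the warning (image tag as used in this run):

    # Ask the in-cluster Tiller for its version; -i without -t avoids the TTY warning.
    kubectl run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -i \
      --namespace=kube-system -- version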

TestAddons/parallel/CSI (46.31s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:512: csi-hostpath-driver pods stabilized in 12.153181ms
addons_test.go:515: (dbg) Run:  kubectl --context addons-20220325011821-262786 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:520: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220325011821-262786 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:525: (dbg) Run:  kubectl --context addons-20220325011821-262786 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:530: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [590103f5-c170-4a4d-b8a9-3cfd1be8574a] Pending
helpers_test.go:343: "task-pv-pod" [590103f5-c170-4a4d-b8a9-3cfd1be8574a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod" [590103f5-c170-4a4d-b8a9-3cfd1be8574a] Running
addons_test.go:530: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 19.006690499s
addons_test.go:535: (dbg) Run:  kubectl --context addons-20220325011821-262786 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:540: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220325011821-262786 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220325011821-262786 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:545: (dbg) Run:  kubectl --context addons-20220325011821-262786 delete pod task-pv-pod
addons_test.go:545: (dbg) Done: kubectl --context addons-20220325011821-262786 delete pod task-pv-pod: (1.113888048s)
addons_test.go:551: (dbg) Run:  kubectl --context addons-20220325011821-262786 delete pvc hpvc
addons_test.go:557: (dbg) Run:  kubectl --context addons-20220325011821-262786 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:562: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220325011821-262786 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:567: (dbg) Run:  kubectl --context addons-20220325011821-262786 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:572: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [21ffccd5-0371-4bf6-8c3e-c90c3bf0c784] Pending
helpers_test.go:343: "task-pv-pod-restore" [21ffccd5-0371-4bf6-8c3e-c90c3bf0c784] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod-restore" [21ffccd5-0371-4bf6-8c3e-c90c3bf0c784] Running
addons_test.go:572: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 15.006224812s
addons_test.go:577: (dbg) Run:  kubectl --context addons-20220325011821-262786 delete pod task-pv-pod-restore
addons_test.go:581: (dbg) Run:  kubectl --context addons-20220325011821-262786 delete pvc hpvc-restore
addons_test.go:585: (dbg) Run:  kubectl --context addons-20220325011821-262786 delete volumesnapshot new-snapshot-demo
addons_test.go:589: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220325011821-262786 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:589: (dbg) Done: out/minikube-linux-amd64 -p addons-20220325011821-262786 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.905268428s)
addons_test.go:593: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220325011821-262786 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.31s)
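
The CSI block above is a full provision, snapshot, and restore round trip. Condensed to its kubectl skeleton (manifests are the testdata files named in the log):

    kubectl create -f testdata/csi-hostpath-driver/pvc.yaml            # provision hpvc
    kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml         # pod consumes the PVC
    kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml       # snapshot the volume
    kubectl delete pod task-pv-pod && kubectl delete pvc hpvc          # drop the original
    kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # new PVC from the snapshot
    kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod consumes the restored PVC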

TestAddons/serial/GCPAuth (38.3s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:604: (dbg) Run:  kubectl --context addons-20220325011821-262786 create -f testdata/busybox.yaml
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [cb99a4b0-6149-4c09-9d83-5c883020e59f] Pending
helpers_test.go:343: "busybox" [cb99a4b0-6149-4c09-9d83-5c883020e59f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [cb99a4b0-6149-4c09-9d83-5c883020e59f] Running
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 7.005944175s
addons_test.go:616: (dbg) Run:  kubectl --context addons-20220325011821-262786 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:653: (dbg) Run:  kubectl --context addons-20220325011821-262786 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:666: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220325011821-262786 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:666: (dbg) Done: out/minikube-linux-amd64 -p addons-20220325011821-262786 addons disable gcp-auth --alsologtostderr -v=1: (5.985067562s)
addons_test.go:682: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220325011821-262786 addons enable gcp-auth
addons_test.go:682: (dbg) Done: out/minikube-linux-amd64 -p addons-20220325011821-262786 addons enable gcp-auth: (2.948050246s)
addons_test.go:688: (dbg) Run:  kubectl --context addons-20220325011821-262786 apply -f testdata/private-image.yaml
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:343: "private-image-7f8587d5b7-64gts" [20e4c1b4-14ae-4082-b7d2-536dced9638d] Pending
helpers_test.go:343: "private-image-7f8587d5b7-64gts" [20e4c1b4-14ae-4082-b7d2-536dced9638d] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:343: "private-image-7f8587d5b7-64gts" [20e4c1b4-14ae-4082-b7d2-536dced9638d] Running
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 12.007249707s
addons_test.go:701: (dbg) Run:  kubectl --context addons-20220325011821-262786 apply -f testdata/private-image-eu.yaml
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-869dcfd8c7-hgnb8" [390b7f04-aa3a-4267-9ab4-5cd4e3497001] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:343: "private-image-eu-869dcfd8c7-hgnb8" [390b7f04-aa3a-4267-9ab4-5cd4e3497001] Running
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 9.005133065s
--- PASS: TestAddons/serial/GCPAuth (38.30s)
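
What gcp-auth is being held to here is credential injection: pods created after the addon is enabled should see the mounted key and project env var without any spec changes. The two spot checks reduce to:

    # Both should print non-empty values inside a freshly created pod.
    kubectl exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"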

TestAddons/StoppedEnableDisable (20.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:133: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220325011821-262786
addons_test.go:133: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220325011821-262786: (20.113966296s)
addons_test.go:137: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220325011821-262786
addons_test.go:141: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220325011821-262786
--- PASS: TestAddons/StoppedEnableDisable (20.31s)
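
The point of this test is that addon toggles are profile-config operations, so they must succeed even while the cluster is stopped (`<profile>` is a placeholder):

    minikube stop -p <profile>
    minikube addons enable dashboard -p <profile>    # works against the stopped cluster
    minikube addons disable dashboard -p <profile>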

TestCertOptions (53.18s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220325014907-262786 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:50: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220325014907-262786 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (43.784149689s)
cert_options_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220325014907-262786 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:89: (dbg) Run:  kubectl --context cert-options-20220325014907-262786 config view
cert_options_test.go:101: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220325014907-262786 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-20220325014907-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220325014907-262786
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220325014907-262786: (8.619569669s)
--- PASS: TestCertOptions (53.18s)
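
The cert assertions read the generated API server certificate straight off the node; the same inspection works on any profile to confirm that --apiserver-ips/--apiserver-names landed in the SANs (`<profile>` is a placeholder):

    # Dump SANs and other fields of the apiserver cert minikube generated.
    minikube -p <profile> ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"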

TestCertExpiration (243.42s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220325014851-262786 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:124: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220325014851-262786 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (45.111894048s)
cert_options_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220325014851-262786 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220325014851-262786 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (14.897493069s)
helpers_test.go:176: Cleaning up "cert-expiration-20220325014851-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220325014851-262786
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220325014851-262786: (3.413567047s)
--- PASS: TestCertExpiration (243.42s)
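
The two starts exercise --cert-expiration: the first issues deliberately short-lived (3m) certificates, and the second start, run after they have lapsed, must regenerate them transparently (`<profile>` is a placeholder):

    minikube start -p <profile> --cert-expiration=3m
    # ...wait out the 3 minutes, then:
    minikube start -p <profile> --cert-expiration=8760h   # restart re-issues the certs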

TestForceSystemdFlag (63.25s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220325014827-262786 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220325014827-262786 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (55.373071905s)
docker_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220325014827-262786 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-20220325014827-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220325014827-262786
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220325014827-262786: (7.492661585s)
--- PASS: TestForceSystemdFlag (63.25s)
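
The trailing "cat /etc/containerd/config.toml" is how the test confirms --force-systemd reached the runtime. As an assumption beyond this log (the key name comes from containerd's CRI runc options, not from the output shown), the line to look for would be:

    minikube -p <profile> ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
    # expected when --force-systemd took effect (assumed key name):
    #   SystemdCgroup = true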

TestForceSystemdEnv (72.35s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220325014714-262786 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:151: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220325014714-262786 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m9.292597061s)
docker_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220325014714-262786 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-20220325014714-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220325014714-262786
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220325014714-262786: (2.688318247s)
--- PASS: TestForceSystemdEnv (72.35s)

TestKVMDriverInstallOrUpdate (1.93s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.93s)

TestErrorSpam/setup (41.29s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220325012251-262786 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220325012251-262786 --driver=docker  --container-runtime=containerd
error_spam_test.go:79: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220325012251-262786 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220325012251-262786 --driver=docker  --container-runtime=containerd: (41.284878589s)
error_spam_test.go:89: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (41.29s)

TestErrorSpam/start (0.89s)

=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220325012251-262786 --log_dir /tmp/nospam-20220325012251-262786 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220325012251-262786 --log_dir /tmp/nospam-20220325012251-262786 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220325012251-262786 --log_dir /tmp/nospam-20220325012251-262786 start --dry-run
--- PASS: TestErrorSpam/start (0.89s)

TestErrorSpam/status (1.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220325012251-262786 --log_dir /tmp/nospam-20220325012251-262786 status
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220325012251-262786 --log_dir /tmp/nospam-20220325012251-262786 status
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220325012251-262786 --log_dir /tmp/nospam-20220325012251-262786 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (2.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220325012251-262786 --log_dir /tmp/nospam-20220325012251-262786 pause
error_spam_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220325012251-262786 --log_dir /tmp/nospam-20220325012251-262786 pause: (1.167438759s)
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220325012251-262786 --log_dir /tmp/nospam-20220325012251-262786 pause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220325012251-262786 --log_dir /tmp/nospam-20220325012251-262786 pause
--- PASS: TestErrorSpam/pause (2.13s)

TestErrorSpam/unpause (1.53s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220325012251-262786 --log_dir /tmp/nospam-20220325012251-262786 unpause
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220325012251-262786 --log_dir /tmp/nospam-20220325012251-262786 unpause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220325012251-262786 --log_dir /tmp/nospam-20220325012251-262786 unpause
--- PASS: TestErrorSpam/unpause (1.53s)

TestErrorSpam/stop (14.99s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220325012251-262786 --log_dir /tmp/nospam-20220325012251-262786 stop
error_spam_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220325012251-262786 --log_dir /tmp/nospam-20220325012251-262786 stop: (14.724262293s)
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220325012251-262786 --log_dir /tmp/nospam-20220325012251-262786 stop
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220325012251-262786 --log_dir /tmp/nospam-20220325012251-262786 stop
--- PASS: TestErrorSpam/stop (14.99s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1796: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/test/nested/copy/262786/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (58.63s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2178: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220325012358-262786 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2178: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220325012358-262786 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (58.629573213s)
--- PASS: TestFunctional/serial/StartWithProxy (58.63s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.41s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:656: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220325012358-262786 --alsologtostderr -v=8
functional_test.go:656: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220325012358-262786 --alsologtostderr -v=8: (15.408710216s)
functional_test.go:660: soft start took 15.409430978s for "functional-20220325012358-262786" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.41s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:678: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.18s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:693: (dbg) Run:  kubectl --context functional-20220325012358-262786 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.18s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1046: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 cache add k8s.gcr.io/pause:3.1
functional_test.go:1046: (dbg) Done: out/minikube-linux-amd64 -p functional-20220325012358-262786 cache add k8s.gcr.io/pause:3.1: (1.231934426s)
functional_test.go:1046: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 cache add k8s.gcr.io/pause:3.3
functional_test.go:1046: (dbg) Done: out/minikube-linux-amd64 -p functional-20220325012358-262786 cache add k8s.gcr.io/pause:3.3: (1.088159357s)
functional_test.go:1046: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.27s)
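
The cache subcommands above pre-pull images through minikube's on-host cache and load them into the node, which is the general pattern for any image (`<profile>` is a placeholder):

    minikube -p <profile> cache add k8s.gcr.io/pause:3.1   # download once, load into the node
    minikube cache list                                    # inspect what is cached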

TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220325012358-262786 /tmp/functional-20220325012358-262786768041995
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 cache add minikube-local-cache-test:functional-20220325012358-262786
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 cache delete minikube-local-cache-test:functional-20220325012358-262786
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220325012358-262786
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (347.1564ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-20220325012358-262786 cache reload: (1.062673241s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.12s)
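
The reload sequence demonstrates the recovery path when a cached image is deleted on the node behind the cache's back (`<profile>` is a placeholder):

    minikube -p <profile> ssh sudo crictl rmi k8s.gcr.io/pause:latest       # remove it on the node
    minikube -p <profile> ssh sudo crictl inspecti k8s.gcr.io/pause:latest  # now fails (exit 1)
    minikube -p <profile> cache reload                                      # push cached images back
    minikube -p <profile> ssh sudo crictl inspecti k8s.gcr.io/pause:latest  # succeeds again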

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:713: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 kubectl -- --context functional-20220325012358-262786 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:738: (dbg) Run:  out/kubectl --context functional-20220325012358-262786 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (41.58s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:754: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220325012358-262786 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0325 01:26:00.416226  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 01:26:00.421907  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 01:26:00.432133  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 01:26:00.452408  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 01:26:00.492666  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 01:26:00.572986  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 01:26:00.733363  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 01:26:01.053970  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 01:26:01.694978  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
functional_test.go:754: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220325012358-262786 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.583744127s)
functional_test.go:758: restart took 41.583864389s for "functional-20220325012358-262786" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.58s)
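
--extra-config is a component.key=value passthrough to the component's own flags; the run above uses it to switch on an apiserver admission plugin, and the same shape works for other apiserver/kubelet options. (The interleaved cert_rotation errors reference the earlier addons-20220325011821-262786 profile whose files were already cleaned up; they appear to be stale-watcher noise rather than part of this test.)

    minikube start -p <profile> \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all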

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:807: (dbg) Run:  kubectl --context functional-20220325012358-262786 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:822: etcd phase: Running
functional_test.go:832: etcd status: Ready
functional_test.go:822: kube-apiserver phase: Running
functional_test.go:832: kube-apiserver status: Ready
functional_test.go:822: kube-controller-manager phase: Running
functional_test.go:832: kube-controller-manager status: Ready
functional_test.go:822: kube-scheduler phase: Running
functional_test.go:832: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.06s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 logs
E0325 01:26:02.975813  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-20220325012358-262786 logs: (1.061790646s)
--- PASS: TestFunctional/serial/LogsCmd (1.06s)

TestFunctional/serial/LogsFileCmd (1.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 logs --file /tmp/functional-20220325012358-2627862677253816/logs.txt
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-20220325012358-262786 logs --file /tmp/functional-20220325012358-2627862677253816/logs.txt: (1.082481297s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.08s)
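
logs streams the diagnostic bundle to stdout by default; --file writes the same bundle to disk, which is all these two tests compare (`<profile>` and the output path are placeholders):

    minikube -p <profile> logs                        # to stdout
    minikube -p <profile> logs --file /tmp/logs.txt   # same content, to a file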

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220325012358-262786 config get cpus: exit status 14 (71.127418ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220325012358-262786 config get cpus: exit status 14 (116.521019ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
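
Exit status 14 from "config get" on an unset key is the contract being asserted above, not a failure of the run; the cycle by hand:

    minikube config unset cpus   # idempotent
    minikube config get cpus     # exits 14: key not found in config
    minikube config set cpus 2
    minikube config get cpus     # prints 2, exits 0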

TestFunctional/parallel/DashboardCmd (2.47s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220325012358-262786 --alsologtostderr -v=1]
functional_test.go:907: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220325012358-262786 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 300129: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (2.47s)

TestFunctional/parallel/DryRun (0.58s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:971: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220325012358-262786 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:971: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220325012358-262786 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (212.068788ms)

-- stdout --
	* [functional-20220325012358-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0325 01:26:34.460633  298807 out.go:297] Setting OutFile to fd 1 ...
	I0325 01:26:34.460741  298807 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:26:34.460753  298807 out.go:310] Setting ErrFile to fd 2...
	I0325 01:26:34.460757  298807 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:26:34.460872  298807 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 01:26:34.461115  298807 out.go:304] Setting JSON to false
	I0325 01:26:34.462451  298807 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14667,"bootTime":1648156928,"procs":621,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 01:26:34.462514  298807 start.go:125] virtualization: kvm guest
	I0325 01:26:34.465477  298807 out.go:176] * [functional-20220325012358-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 01:26:34.467262  298807 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 01:26:34.468632  298807 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 01:26:34.470035  298807 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 01:26:34.471472  298807 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 01:26:34.472932  298807 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 01:26:34.473374  298807 config.go:176] Loaded profile config "functional-20220325012358-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:26:34.473768  298807 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 01:26:34.512073  298807 docker.go:136] docker version: linux-20.10.14
	I0325 01:26:34.512150  298807 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 01:26:34.602589  298807 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:40 SystemTime:2022-03-25 01:26:34.541155518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 01:26:34.602686  298807 docker.go:253] overlay module found
	I0325 01:26:34.605032  298807 out.go:176] * Using the docker driver based on existing profile
	I0325 01:26:34.605059  298807 start.go:284] selected driver: docker
	I0325 01:26:34.605070  298807 start.go:801] validating driver "docker" against &{Name:functional-20220325012358-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:functional-20220325012358-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 01:26:34.605179  298807 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 01:26:34.605221  298807 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 01:26:34.605250  298807 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0325 01:26:34.606872  298807 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 01:26:34.608985  298807 out.go:176] 
	W0325 01:26:34.609088  298807 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0325 01:26:34.610626  298807 out.go:176] 

** /stderr **
functional_test.go:988: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220325012358-262786 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.58s)
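Note: this dry-run failure is the expected outcome; the test passes precisely because minikube refuses the 250MB request. A minimal sketch in Go (not minikube's actual implementation) of the memory-floor validation that produces the RSRC_INSUFFICIENT_REQ_MEMORY exit above, using the figures from the log:

    package main

    import "fmt"

    // minUsableMiB is the floor reported in the log above.
    const minUsableMiB = 1800

    func validateMemory(requestedMiB int) error {
        if requestedMiB < minUsableMiB {
            return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
                requestedMiB, minUsableMiB)
        }
        return nil
    }

    func main() {
        if err := validateMemory(250); err != nil {
            fmt.Println("X Exiting due to", err) // mirrors the log's exit message
        }
    }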

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1017: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220325012358-262786 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1017: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220325012358-262786 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (209.448977ms)

-- stdout --
	* [functional-20220325012358-262786] minikube v1.25.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0325 01:26:29.557336  297690 out.go:297] Setting OutFile to fd 1 ...
	I0325 01:26:29.557427  297690 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:26:29.557436  297690 out.go:310] Setting ErrFile to fd 2...
	I0325 01:26:29.557440  297690 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:26:29.557598  297690 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 01:26:29.557806  297690 out.go:304] Setting JSON to false
	I0325 01:26:29.559081  297690 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14662,"bootTime":1648156928,"procs":613,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 01:26:29.559135  297690 start.go:125] virtualization: kvm guest
	I0325 01:26:29.562133  297690 out.go:176] * [functional-20220325012358-262786] minikube v1.25.2 sur Ubuntu 20.04 (kvm/amd64)
	I0325 01:26:29.563739  297690 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 01:26:29.565135  297690 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 01:26:29.566606  297690 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 01:26:29.568124  297690 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 01:26:29.569519  297690 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 01:26:29.569902  297690 config.go:176] Loaded profile config "functional-20220325012358-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:26:29.570266  297690 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 01:26:29.608986  297690 docker.go:136] docker version: linux-20.10.14
	I0325 01:26:29.609100  297690 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 01:26:29.697432  297690 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2022-03-25 01:26:29.636756671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 01:26:29.697607  297690 docker.go:253] overlay module found
	I0325 01:26:29.701168  297690 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I0325 01:26:29.701193  297690 start.go:284] selected driver: docker
	I0325 01:26:29.701199  297690 start.go:801] validating driver "docker" against &{Name:functional-20220325012358-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:functional-20220325012358-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0325 01:26:29.701340  297690 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 01:26:29.701372  297690 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 01:26:29.701393  297690 out.go:241] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0325 01:26:29.703090  297690 out.go:176]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 01:26:29.705037  297690 out.go:176] 
	W0325 01:26:29.705139  297690 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0325 01:26:29.706570  297690 out.go:176] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
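Note: the French stderr above is the point of this test: with a French locale, minikube localizes the same RSRC_INSUFFICIENT_REQ_MEMORY failure (in English: "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is below the usable minimum of 1800MB"). A minimal sketch, assuming a fr_FR.UTF-8 locale is available on the host, of forcing localized CLI output the way this test does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "start",
            "-p", "functional-20220325012358-262786",
            "--dry-run", "--memory", "250MB", "--alsologtostderr",
            "--driver=docker", "--container-runtime=containerd")
        // Override the locale so minikube selects its French message catalog.
        cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
        out, err := cmd.CombinedOutput()
        fmt.Printf("exit: %v\n%s", err, out)
    }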

TestFunctional/parallel/StatusCmd (1.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:851: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:857: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.26s)
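Note: the -f flag renders a Go template over minikube's status struct; "kublet" in the format string above is a literal label (a typo carried in the test itself), while {{.Kubelet}} is the actual field reference. A minimal sketch, with assumed status values, of how such a template renders:

    package main

    import (
        "os"
        "text/template"
    )

    // Status mirrors the fields referenced by the format string above.
    type Status struct {
        Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
        s := Status{"Running", "Running", "Running", "Configured"} // assumed values
        t := template.Must(template.New("status").Parse(
            "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
        t.Execute(os.Stdout, s)
    }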

TestFunctional/parallel/ServiceCmd (11.44s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) Run:  kubectl --context functional-20220325012358-262786 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1449: (dbg) Run:  kubectl --context functional-20220325012358-262786 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1454: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-54fbb85-tjqw7" [35927351-1f2d-4a72-bd85-82a3e82fa405] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-54fbb85-tjqw7" [35927351-1f2d-4a72-bd85-82a3e82fa405] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1454: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 8.020772203s
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1473: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1486: found endpoint: https://192.168.49.2:30838
functional_test.go:1501: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1515: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1521: found endpoint for hello-node: http://192.168.49.2:30838
--- PASS: TestFunctional/parallel/ServiceCmd (11.44s)

TestFunctional/parallel/ServiceCmdConnect (10.75s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) Run:  kubectl --context functional-20220325012358-262786 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1575: (dbg) Run:  kubectl --context functional-20220325012358-262786 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1580: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:343: "hello-node-connect-74cf8bc446-8wgtw" [1a5d86f6-c695-43cd-abe0-9a10889be394] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:343: "hello-node-connect-74cf8bc446-8wgtw" [1a5d86f6-c695-43cd-abe0-9a10889be394] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1580: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.008865907s
functional_test.go:1589: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 service hello-node-connect --url

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1595: found endpoint for hello-node-connect: http://192.168.49.2:32252
functional_test.go:1615: http://192.168.49.2:32252: success! body:

Hostname: hello-node-connect-74cf8bc446-8wgtw

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32252
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.75s)
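Note: the body above is the echoserver reflecting the request back. A minimal sketch of the connectivity check this test performs: once `minikube service hello-node-connect --url` prints an endpoint, a plain HTTP GET against it should return that reflection (the URL below is the one from this run and will differ between runs):

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Endpoint printed by `service hello-node-connect --url` in this run.
        resp, err := http.Get("http://192.168.49.2:32252/")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status %d\n%s", resp.StatusCode, body)
    }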

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1630: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 addons list
functional_test.go:1642: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (26.99s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [bf6e09dd-832a-4ad7-aa65-b442437d7390] Running
E0325 01:26:10.657487  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.274546665s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20220325012358-262786 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20220325012358-262786 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220325012358-262786 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220325012358-262786 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [604d252f-072f-4c46-b859-f8817c69d413] Pending
helpers_test.go:343: "sp-pod" [604d252f-072f-4c46-b859-f8817c69d413] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [604d252f-072f-4c46-b859-f8817c69d413] Running
E0325 01:26:20.897814  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.006600602s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20220325012358-262786 exec sp-pod -- touch /tmp/mount/foo

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20220325012358-262786 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:107: (dbg) Done: kubectl --context functional-20220325012358-262786 delete -f testdata/storage-provisioner/pod.yaml: (1.482183104s)
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220325012358-262786 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [ecc2e905-38b1-4e98-94af-5b94e6afa011] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [ecc2e905-38b1-4e98-94af-5b94e6afa011] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.006702408s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20220325012358-262786 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.99s)
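Note: the sequence above is a persistence check: write /tmp/mount/foo through the claim, delete and recreate the pod, then confirm the file survived. A minimal sketch of the same steps, shelling out to kubectl as the test does:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run invokes kubectl against the test profile's context, as the test does.
    func run(args ...string) {
        full := append([]string{"--context", "functional-20220325012358-262786"}, args...)
        out, err := exec.Command("kubectl", full...).CombinedOutput()
        fmt.Printf("$ kubectl %v\n%s(err: %v)\n", args, out, err)
    }

    func main() {
        run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
        run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
        run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
        // The real test waits here for the replacement pod to reach Running.
        run("exec", "sp-pod", "--", "ls", "/tmp/mount")
    }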

TestFunctional/parallel/SSHCmd (0.84s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1665: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1682: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.84s)

TestFunctional/parallel/CpCmd (1.31s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh -n functional-20220325012358-262786 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 cp functional-20220325012358-262786:/home/docker/cp-test.txt /tmp/mk_test93729908/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh -n functional-20220325012358-262786 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)

TestFunctional/parallel/MySQL (20.69s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-20220325012358-262786 replace --force -f testdata/mysql.yaml

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1740: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-c8t85" [cd559e6a-1ea2-4840-924f-c82e38f50085] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-c8t85" [cd559e6a-1ea2-4840-924f-c82e38f50085] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1740: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.014044463s
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220325012358-262786 exec mysql-b87c45988-c8t85 -- mysql -ppassword -e "show databases;"
functional_test.go:1748: (dbg) Non-zero exit: kubectl --context functional-20220325012358-262786 exec mysql-b87c45988-c8t85 -- mysql -ppassword -e "show databases;": exit status 1 (178.814645ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220325012358-262786 exec mysql-b87c45988-c8t85 -- mysql -ppassword -e "show databases;"
functional_test.go:1748: (dbg) Non-zero exit: kubectl --context functional-20220325012358-262786 exec mysql-b87c45988-c8t85 -- mysql -ppassword -e "show databases;": exit status 1 (203.470034ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220325012358-262786 exec mysql-b87c45988-c8t85 -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1748: (dbg) Non-zero exit: kubectl --context functional-20220325012358-262786 exec mysql-b87c45988-c8t85 -- mysql -ppassword -e "show databases;": exit status 1 (170.591049ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220325012358-262786 exec mysql-b87c45988-c8t85 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.69s)
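Note: the ERROR 1045 and ERROR 2002 failures above are expected while mysqld is still initializing inside the pod; the test simply retries the query until it succeeds. A minimal sketch of that retry-with-backoff loop:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for backoff := time.Second; time.Now().Before(deadline); backoff *= 2 {
            out, err := exec.Command("kubectl", "--context", "functional-20220325012358-262786",
                "exec", "mysql-b87c45988-c8t85", "--",
                "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
            if err == nil {
                fmt.Printf("%s", out) // mysqld is up and the credentials work
                return
            }
            fmt.Printf("not ready yet (%v), retrying in %v\n", err, backoff)
            time.Sleep(backoff)
        }
        fmt.Println("gave up waiting for mysqld")
    }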

TestFunctional/parallel/FileSync (0.42s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1870: Checking for existence of /etc/test/nested/copy/262786/hosts within VM
functional_test.go:1872: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "sudo cat /etc/test/nested/copy/262786/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1877: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.42s)

TestFunctional/parallel/CertSync (2.5s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1913: Checking for existence of /etc/ssl/certs/262786.pem within VM
functional_test.go:1914: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "sudo cat /etc/ssl/certs/262786.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1913: Checking for existence of /usr/share/ca-certificates/262786.pem within VM
functional_test.go:1914: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "sudo cat /usr/share/ca-certificates/262786.pem"
E0325 01:26:05.536469  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1913: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1914: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1940: Checking for existence of /etc/ssl/certs/2627862.pem within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "sudo cat /etc/ssl/certs/2627862.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1940: Checking for existence of /usr/share/ca-certificates/2627862.pem within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "sudo cat /usr/share/ca-certificates/2627862.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1940: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.50s)
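Note: each certificate is checked twice above: under its file name (262786.pem) and under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0), which is how the system trust store links certificates. A minimal sketch of reproducing the hashed name, assuming the cert is readable at that path:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // `openssl x509 -hash` prints the subject hash used as the link name.
        out, err := exec.Command("openssl", "x509", "-noout", "-hash",
            "-in", "/etc/ssl/certs/262786.pem").Output()
        if err != nil {
            panic(err)
        }
        fmt.Printf("hashed link name: %s.0\n", strings.TrimSpace(string(out)))
    }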

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20220325012358-262786 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
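Note: the --template above uses go-template's two-variable range over a map, which visits keys, so only the label names are printed. A minimal sketch of the same template against sample labels (the labels below are assumed, not the node's actual set):

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        labels := map[string]string{
            "kubernetes.io/hostname": "functional-20220325012358-262786",
            "kubernetes.io/os":       "linux",
        }
        t := template.Must(template.New("labels").Parse(
            "{{range $k, $v := .}}{{$k}} {{end}}"))
        t.Execute(os.Stdout, labels)
    }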

TestFunctional/parallel/NonActiveRuntimeDisabled (0.83s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "sudo systemctl is-active docker"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1968: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "sudo systemctl is-active docker": exit status 1 (400.64622ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1968: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "sudo systemctl is-active crio": exit status 1 (430.780086ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.83s)
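Note: both non-zero exits above are the expected result: `systemctl is-active` prints "inactive" and exits with status 3 for a stopped unit, which `minikube ssh` surfaces as a failure. A minimal sketch of checking a unit's state and exit code the same way:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("systemctl", "is-active", "docker").CombinedOutput()
        code := 0
        if exitErr, ok := err.(*exec.ExitError); ok {
            code = exitErr.ExitCode() // 3 means the unit is inactive
        }
        fmt.Printf("output=%q exit=%d\n", out, code)
    }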

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image ls --format short

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220325012358-262786 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.3
k8s.gcr.io/kube-proxy:v1.23.3
k8s.gcr.io/kube-controller-manager:v1.23.3
k8s.gcr.io/kube-apiserver:v1.23.3
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220325012358-262786
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220325012358-262786
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2200: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.31s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2214: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 version -o=json --components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2214: (dbg) Done: out/minikube-linux-amd64 -p functional-20220325012358-262786 version -o=json --components: (1.313589155s)
--- PASS: TestFunctional/parallel/Version/components (1.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image ls --format table

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220325012358-262786 image ls --format table:
|---------------------------------------------|----------------------------------|---------------|--------|
|                    Image                    |               Tag                |   Image ID    |  Size  |
|---------------------------------------------|----------------------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-20220325012358-262786 | sha256:d7ee49 | 1.74kB |
| k8s.gcr.io/kube-controller-manager          | v1.23.3                          | sha256:b07520 | 30.2MB |
| k8s.gcr.io/kube-proxy                       | v1.23.3                          | sha256:9b7cc9 | 39.3MB |
| k8s.gcr.io/pause                            | 3.1                              | sha256:da86e6 | 315kB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.3                          | sha256:99a348 | 15.1MB |
| k8s.gcr.io/pause                            | 3.3                              | sha256:0184c1 | 298kB  |
| k8s.gcr.io/pause                            | 3.6                              | sha256:6270bb | 302kB  |
| docker.io/kindest/kindnetd                  | v20210326-1e038dc5               | sha256:6de166 | 54MB   |
| docker.io/kubernetesui/metrics-scraper      | v1.0.7                           | sha256:7801cf | 15MB   |
| docker.io/library/nginx                     | latest                           | sha256:f2f70a | 56.7MB |
| gcr.io/google-containers/addon-resizer      | functional-20220325012358-262786 | sha256:ffd4cf | 10.8MB |
| k8s.gcr.io/etcd                             | 3.5.1-0                          | sha256:25f8c7 | 98.9MB |
| k8s.gcr.io/kube-apiserver                   | v1.23.3                          | sha256:f40be0 | 32.6MB |
| k8s.gcr.io/pause                            | latest                           | sha256:350b16 | 72.3kB |
| docker.io/kubernetesui/dashboard            | v2.3.1                           | sha256:e1482a | 66.9MB |
| docker.io/library/mysql                     | 5.7                              | sha256:05311a | 155MB  |
| docker.io/library/nginx                     | alpine                           | sha256:d7c7c5 | 10.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                               | sha256:6e38f4 | 9.06MB |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                           | sha256:a4ca41 | 13.6MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                     | sha256:56cc51 | 2.4MB  |
| k8s.gcr.io/echoserver                       | 1.8                              | sha256:82e4c8 | 46.2MB |
|---------------------------------------------|----------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image ls --format json

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220325012358-262786 image ls --format json:
[{"id":"sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","repoDigests":["docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"],"repoTags":["docker.io/kindest/kindnetd:v20210326-1e038dc5"],"size":"53960776"},{"id":"sha256:d7ee49658fad803badbe274d6230e4fcac7b6f6ea28f51cb24800ac7e5b05071","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220325012358-262786"],"size":"1737"},{"id":"sha256:f2f70adc5d89aa922836e9cc6801980a12a7ff9012446cc6edf52ef8798a67bd","repoDigests":["docker.io/library/nginx@sha256:4ed64c2e0857ad21c38b98345ebb5edb01791a0a10b0e9e3d9ddde185cdbd31a"],"repoTags":["docker.io/library/nginx:latest"],"size":"56743155"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"315399"},{"id":"sha256:25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":["k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263"],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"98888614"},{"id":"sha256:9b7cc9982109819e8fe5b0b6c0d3122790f88275e13b02f79e7e9e307466aa1b","repoDigests":["k8s.gcr.io/kube-proxy@sha256:def87f007b49d50693aed83d4703d0e56c69ae286154b1c7a20cd1b3a320cf7c"],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.3"],"size":"39274184"},{"id":"sha256:99a3486be4f2837c939313935007928f97b81a1cf11495808d81ad6b14c04078","repoDigests":["k8s.gcr.io/kube-scheduler@sha256:32308abe86f7415611ca86ee79dd0a73e74ebecb2f9e3eb85fc3a8e62f03d0e7"],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.3"],"size":"15130695"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:b07520cd7ab76ec98ea6c07ae56d21d65f29708c24f90a55a3c30d823419577e","repoDigests":["k8s.gcr.io/kube-controller-manager@sha256:b721871d9a9c55836cbcbb2bf375e02696260628f73620b267be9a9a50c97f5a"],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.3"],"size":"30166158"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:36d5b3f60e1a144cc5ada820910535074bdf5cf73fb70d1ff1681537eef4e172"],"repoTags":["docker.io/kubernetesui/metrics-scraper:v1.0.7"],"size":"15029138"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220325012358-262786"],"size":"10823156"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":["k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e"],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"13585107"},{"id":"sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":["k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db"],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"301773"},{"id":"sha256:e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570","repoDigests":["docker.io/kubernetesui/dashboard@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e"],"repoTags":["docker.io/kubernetesui/dashboard:v2.3.1"],"size":"66934416"},{"id":"sha256:05311a87aeb4d7f98b2726c39d4d29d6a174d20953a6d1ceaa236bfa177f5fb6","repoDigests":["docker.io/library/mysql@sha256:c8f68301981a7224cc9c063fc7a97b6ef13cfc4142b4871d1a35c95777ce96f4"],"repoTags":["docker.io/library/mysql:5.7"],"size":"155429429"},{"id":"sha256:d7c7c5df4c3a3b3ceee3236b343877b77bb429e1ec745e9681c5b182bfe8f99b","repoDigests":["docker.io/library/nginx@sha256:250c11e0c39dc17ba617f3eb0b28b6f456d46483f04823154c6aa68c432ded72"],"repoTags":["docker.io/library/nginx:alpine"],"size":"10169440"},{"id":"sha256:f40be0088a83e79642d0a2a1bbc55e61b9289167385e67701b82ea85fc9bbfc4","repoDigests":["k8s.gcr.io/kube-apiserver@sha256:b8eba88862bab7d3d7cdddad669ff1ece006baa10d3a3df119683434497a0949"],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.3"],"size":"32599280"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
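Note: the stdout above is a JSON array of image records. A minimal sketch of consuming `image ls --format json`, with the struct shape inferred from the fields visible in the output:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // image matches the record shape in the JSON listing above.
    type image struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"`
    }

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p",
            "functional-20220325012358-262786", "image", "ls", "--format", "json").Output()
        if err != nil {
            panic(err)
        }
        var images []image
        if err := json.Unmarshal(out, &images); err != nil {
            panic(err)
        }
        for _, img := range images {
            fmt.Println(img.RepoTags, img.Size)
        }
    }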

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image ls --format yaml
2022/03/25 01:26:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220325012358-262786 image ls --format yaml:
- id: sha256:9b7cc9982109819e8fe5b0b6c0d3122790f88275e13b02f79e7e9e307466aa1b
repoDigests:
- k8s.gcr.io/kube-proxy@sha256:def87f007b49d50693aed83d4703d0e56c69ae286154b1c7a20cd1b3a320cf7c
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.3
size: "39274184"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
- id: sha256:d7ee49658fad803badbe274d6230e4fcac7b6f6ea28f51cb24800ac7e5b05071
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220325012358-262786
size: "1737"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220325012358-262786
size: "10823156"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:f40be0088a83e79642d0a2a1bbc55e61b9289167385e67701b82ea85fc9bbfc4
repoDigests:
- k8s.gcr.io/kube-apiserver@sha256:b8eba88862bab7d3d7cdddad669ff1ece006baa10d3a3df119683434497a0949
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.3
size: "32599280"
- id: sha256:b07520cd7ab76ec98ea6c07ae56d21d65f29708c24f90a55a3c30d823419577e
repoDigests:
- k8s.gcr.io/kube-controller-manager@sha256:b721871d9a9c55836cbcbb2bf375e02696260628f73620b267be9a9a50c97f5a
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.3
size: "30166158"
- id: sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests:
- k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db
repoTags:
- k8s.gcr.io/pause:3.6
size: "301773"
- id: sha256:7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:36d5b3f60e1a144cc5ada820910535074bdf5cf73fb70d1ff1681537eef4e172
repoTags:
- docker.io/kubernetesui/metrics-scraper:v1.0.7
size: "15029138"
- id: sha256:f2f70adc5d89aa922836e9cc6801980a12a7ff9012446cc6edf52ef8798a67bd
repoDigests:
- docker.io/library/nginx@sha256:4ed64c2e0857ad21c38b98345ebb5edb01791a0a10b0e9e3d9ddde185cdbd31a
repoTags:
- docker.io/library/nginx:latest
size: "56743155"
- id: sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb
repoDigests:
- docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c
repoTags:
- docker.io/kindest/kindnetd:v20210326-1e038dc5
size: "53960776"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests:
- k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "13585107"
- id: sha256:25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests:
- k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "98888614"
- id: sha256:99a3486be4f2837c939313935007928f97b81a1cf11495808d81ad6b14c04078
repoDigests:
- k8s.gcr.io/kube-scheduler@sha256:32308abe86f7415611ca86ee79dd0a73e74ebecb2f9e3eb85fc3a8e62f03d0e7
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.3
size: "15130695"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e
repoTags:
- docker.io/kubernetesui/dashboard:v2.3.1
size: "66934416"
- id: sha256:05311a87aeb4d7f98b2726c39d4d29d6a174d20953a6d1ceaa236bfa177f5fb6
repoDigests:
- docker.io/library/mysql@sha256:c8f68301981a7224cc9c063fc7a97b6ef13cfc4142b4871d1a35c95777ce96f4
repoTags:
- docker.io/library/mysql:5.7
size: "155429429"
- id: sha256:d7c7c5df4c3a3b3ceee3236b343877b77bb429e1ec745e9681c5b182bfe8f99b
repoDigests:
- docker.io/library/nginx@sha256:250c11e0c39dc17ba617f3eb0b28b6f456d46483f04823154c6aa68c432ded72
repoTags:
- docker.io/library/nginx:alpine
size: "10169440"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh pgrep buildkitd: exit status 1 (402.596865ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image build -t localhost/my-image:functional-20220325012358-262786 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p functional-20220325012358-262786 image build -t localhost/my-image:functional-20220325012358-262786 testdata/build: (1.838468952s)
functional_test.go:320: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20220325012358-262786 image build -t localhost/my-image:functional-20220325012358-262786 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.5s

#6 [internal] load build context
#6 transferring context: 62B done
#6 DONE 0.0s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#4 DONE 0.1s

#5 [2/3] RUN true
#5 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.3s done
#8 exporting manifest sha256:701e58809e5b7c75c12eb564f0c105f593441fb63f6523b881fd111d2c43a6f6 done
#8 exporting config sha256:2f88a58040e0815e929916331f81ce920e5806298960187e84f49c15a864eb60 done
#8 naming to localhost/my-image:functional-20220325012358-262786 done
#8 DONE 0.4s
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.52s)
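
For context, the BuildKit trace above resolves to a minimal three-step build: a busybox base image, a no-op RUN layer, and a single ADD. A rough by-hand equivalent is sketched below; the Dockerfile contents are inferred from the logged steps, and the paths are illustrative rather than the literal testdata/build files:

    # Recreate a build context matching the logged steps (hypothetical layout)
    mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo 'content' > content.txt
    # Same image-build entry point the test exercises
    out/minikube-linux-amd64 -p functional-20220325012358-262786 image build -t localhost/my-image:sketch .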

TestFunctional/parallel/ImageCommands/Setup (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:339: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:344: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220325012358-262786
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.98s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2060: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2060: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2060: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.73s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1276: (dbg) Run:  out/minikube-linux-amd64 profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.73s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220325012358-262786

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Done: out/minikube-linux-amd64 -p functional-20220325012358-262786 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220325012358-262786: (5.855879964s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.17s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1316: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1321: Took "428.414181ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 profile list -l

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1335: Took "78.6055ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1367: (dbg) Run:  out/minikube-linux-amd64 profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1372: Took "391.035529ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1380: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1385: Took "73.411092ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:128: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220325012358-262786 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:148: (dbg) Run:  kubectl --context functional-20220325012358-262786 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [6412f445-b43c-4ec3-aeed-e8c527ba7633] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [6412f445-b43c-4ec3-aeed-e8c527ba7633] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.033843807s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220325012358-262786

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Done: out/minikube-linux-amd64 -p functional-20220325012358-262786 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220325012358-262786: (6.484748854s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.74s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:232: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:237: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220325012358-262786
functional_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220325012358-262786

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:242: (dbg) Done: out/minikube-linux-amd64 -p functional-20220325012358-262786 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220325012358-262786: (4.756895306s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.24s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220325012358-262786 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:235: tunnel at http://10.96.111.115 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:370: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220325012358-262786 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:377: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image save gcr.io/google-containers/addon-resizer:functional-20220325012358-262786 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.98s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image rm gcr.io/google-containers/addon-resizer:functional-20220325012358-262786

=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:406: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:406: (dbg) Done: out/minikube-linux-amd64 -p functional-20220325012358-262786 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (2.087497442s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.34s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:416: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220325012358-262786
functional_test.go:421: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220325012358-262786

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:421: (dbg) Done: out/minikube-linux-amd64 -p functional-20220325012358-262786 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220325012358-262786: (1.074835975s)
functional_test.go:426: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220325012358-262786
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.15s)

TestFunctional/parallel/MountCmd/any-port (6.61s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220325012358-262786 /tmp/mounttest3241448040:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1648171589709820432" to /tmp/mounttest3241448040/created-by-test
functional_test_mount_test.go:110: wrote "test-1648171589709820432" to /tmp/mounttest3241448040/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1648171589709820432" to /tmp/mounttest3241448040/test-1648171589709820432
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (340.803578ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh -- ls -la /mount-9p
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 25 01:26 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 25 01:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 25 01:26 test-1648171589709820432
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh cat /mount-9p/test-1648171589709820432
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20220325012358-262786 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [0abb4c69-79c2-4f6d-9e24-2f36946f2d62] Pending
helpers_test.go:343: "busybox-mount" [0abb4c69-79c2-4f6d-9e24-2f36946f2d62] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [0abb4c69-79c2-4f6d-9e24-2f36946f2d62] Running

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [0abb4c69-79c2-4f6d-9e24-2f36946f2d62] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.00685408s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20220325012358-262786 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh stat /mount-9p/created-by-test

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh stat /mount-9p/created-by-pod

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220325012358-262786 /tmp/mounttest3241448040:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.61s)
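
The mount flow above can be reproduced by hand in roughly the same order; this sketch uses the commands from the log (the /tmp directory name is illustrative, and the first findmnt may need a retry while the 9p server comes up, as seen above):

    mkdir -p /tmp/mount-sketch
    out/minikube-linux-amd64 mount -p functional-20220325012358-262786 /tmp/mount-sketch:/mount-9p &
    out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh -- ls -la /mount-9p
    # Clean up: unmount inside the guest, then stop the background mount process
    out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "sudo umount -f /mount-9p"
    kill %1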

TestFunctional/parallel/MountCmd/specific-port (2.2s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220325012358-262786 /tmp/mounttest1571204190:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (448.630822ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220325012358-262786 /tmp/mounttest1571204190:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh "sudo umount -f /mount-9p": exit status 1 (351.207454ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20220325012358-262786 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220325012358-262786 /tmp/mounttest1571204190:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.20s)

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220325012358-262786
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220325012358-262786
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220325012358-262786
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (115.25s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220325012643-262786 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0325 01:27:22.339094  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220325012643-262786 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m55.247387081s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (115.25s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.14s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220325012643-262786 addons enable ingress --alsologtostderr -v=5
E0325 01:28:44.259560  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220325012643-262786 addons enable ingress --alsologtostderr -v=5: (9.135803476s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.14s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.37s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220325012643-262786 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.37s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (33.54s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20220325012643-262786 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20220325012643-262786 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.388899781s)
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220325012643-262786 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20220325012643-262786 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [f148d05d-5c0e-4a28-99c5-7fbdffe9d8bf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [f148d05d-5c0e-4a28-99c5-7fbdffe9d8bf] Running
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.005820683s
addons_test.go:213: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220325012643-262786 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run:  kubectl --context ingress-addon-legacy-20220325012643-262786 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220325012643-262786 ip
addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220325012643-262786 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:257: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220325012643-262786 addons disable ingress-dns --alsologtostderr -v=1: (4.5662097s)
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220325012643-262786 addons disable ingress --alsologtostderr -v=1
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220325012643-262786 addons disable ingress --alsologtostderr -v=1: (7.257627394s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (33.54s)

TestJSONOutput/start/Command (67.56s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220325012924-262786 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220325012924-262786 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m7.561199102s)
--- PASS: TestJSONOutput/start/Command (67.56s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.7s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220325012924-262786 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220325012924-262786 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (15.71s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220325012924-262786 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220325012924-262786 --output=json --user=testUser: (15.708832136s)
--- PASS: TestJSONOutput/stop/Command (15.71s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.28s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220325013054-262786 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220325013054-262786 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.160534ms)

-- stdout --
	{"specversion":"1.0","id":"1a628783-a2da-46c6-823e-f77869ef5bf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220325013054-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"417ccfed-f705-4048-822b-12511f6a1adc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13812"}}
	{"specversion":"1.0","id":"9ff2d733-2046-4518-816a-73c14695218d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c42c5386-3215-4c90-a5fa-02d144237dcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig"}}
	{"specversion":"1.0","id":"3b0c22a2-f145-48b0-b748-7abbcb6c0e70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube"}}
	{"specversion":"1.0","id":"a29d54ed-1e03-481d-b80e-3119d9ee11e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"42b7091f-dcc3-41b1-a9c4-a8100c8bf90c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20220325013054-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220325013054-262786
--- PASS: TestErrorJSONOutput (0.28s)
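
Each stdout line above is a CloudEvents-style JSON object, so the error event from a --output=json run can be pulled out with a line-oriented filter. A sketch, assuming jq is installed (the profile name here is illustrative):

    out/minikube-linux-amd64 start -p json-sketch --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # prints: The driver 'fail' is not supported on linux/amd64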

TestKicCustomNetwork/create_custom_network (35.35s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220325013054-262786 --network=
E0325 01:31:00.416610  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 01:31:05.108553  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
E0325 01:31:05.113855  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
E0325 01:31:05.124140  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
E0325 01:31:05.144431  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
E0325 01:31:05.184694  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
E0325 01:31:05.265047  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
E0325 01:31:05.425494  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
E0325 01:31:05.746230  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
E0325 01:31:06.387325  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
E0325 01:31:07.667866  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
E0325 01:31:10.228328  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
E0325 01:31:15.348858  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
E0325 01:31:25.589883  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220325013054-262786 --network=: (33.003186005s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220325013054-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220325013054-262786
E0325 01:31:28.099833  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220325013054-262786: (2.312413872s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.35s)

TestKicCustomNetwork/use_default_bridge_network (28.5s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220325013129-262786 --network=bridge
E0325 01:31:46.070842  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220325013129-262786 --network=bridge: (26.341857321s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220325013129-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220325013129-262786
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220325013129-262786: (2.12813602s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (28.50s)

TestKicExistingNetwork (28.48s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220325013158-262786 --network=existing-network
kic_custom_network_test.go:94: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220325013158-262786 --network=existing-network: (25.947973765s)
helpers_test.go:176: Cleaning up "existing-network-20220325013158-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220325013158-262786
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220325013158-262786: (2.31683966s)
--- PASS: TestKicExistingNetwork (28.48s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMountStart/serial/StartWithMountFirst (4.7s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220325013226-262786 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0325 01:32:27.031054  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220325013226-262786 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.703655332s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.70s)

TestMountStart/serial/VerifyMountFirst (0.33s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220325013226-262786 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.33s)

TestMountStart/serial/StartWithMountSecond (4.62s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220325013226-262786 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220325013226-262786 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.620877067s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.62s)

TestMountStart/serial/VerifyMountSecond (0.33s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220325013226-262786 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.33s)

TestMountStart/serial/DeleteFirst (1.86s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220325013226-262786 --alsologtostderr -v=5
pause_test.go:133: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220325013226-262786 --alsologtostderr -v=5: (1.86140694s)
--- PASS: TestMountStart/serial/DeleteFirst (1.86s)

TestMountStart/serial/VerifyMountPostDelete (0.33s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220325013226-262786 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.33s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:156: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220325013226-262786
mount_start_test.go:156: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220325013226-262786: (1.264858676s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (6.38s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:167: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220325013226-262786
mount_start_test.go:167: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220325013226-262786: (5.379823075s)
--- PASS: TestMountStart/serial/RestartStopped (6.38s)

TestMountStart/serial/VerifyMountPostStop (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220325013226-262786 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

TestMultiNode/serial/FreshStart2Nodes (103.98s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220325013248-262786 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0325 01:33:47.791120  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 01:33:47.796408  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 01:33:47.806712  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 01:33:47.826967  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 01:33:47.867283  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 01:33:47.947620  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 01:33:48.108078  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 01:33:48.428677  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 01:33:48.951303  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
E0325 01:33:49.069542  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 01:33:50.349930  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 01:33:52.911082  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 01:33:58.032212  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 01:34:08.272578  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 01:34:28.753633  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220325013248-262786 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m43.418504048s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (103.98s)

TestMultiNode/serial/DeployApp2Nodes (3.49s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220325013248-262786 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:491: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220325013248-262786 -- rollout status deployment/busybox
multinode_test.go:491: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220325013248-262786 -- rollout status deployment/busybox: (1.870970603s)
multinode_test.go:497: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220325013248-262786 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220325013248-262786 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:517: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220325013248-262786 -- exec busybox-7978565885-9km2t -- nslookup kubernetes.io
multinode_test.go:517: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220325013248-262786 -- exec busybox-7978565885-c6f4h -- nslookup kubernetes.io
multinode_test.go:527: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220325013248-262786 -- exec busybox-7978565885-9km2t -- nslookup kubernetes.default
multinode_test.go:527: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220325013248-262786 -- exec busybox-7978565885-c6f4h -- nslookup kubernetes.default
multinode_test.go:535: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220325013248-262786 -- exec busybox-7978565885-9km2t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:535: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220325013248-262786 -- exec busybox-7978565885-c6f4h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.49s)

TestMultiNode/serial/PingHostFrom2Pods (0.8s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:545: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220325013248-262786 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:553: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220325013248-262786 -- exec busybox-7978565885-9km2t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220325013248-262786 -- exec busybox-7978565885-9km2t -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:553: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220325013248-262786 -- exec busybox-7978565885-c6f4h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220325013248-262786 -- exec busybox-7978565885-c6f4h -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

TestMultiNode/serial/AddNode (43.11s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220325013248-262786 -v 3 --alsologtostderr
E0325 01:35:09.715100  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220325013248-262786 -v 3 --alsologtostderr: (42.352743442s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.11s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (11.83s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 status --output json --alsologtostderr
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 cp testdata/cp-test.txt multinode-20220325013248-262786:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 cp multinode-20220325013248-262786:/home/docker/cp-test.txt /tmp/mk_cp_test4096231143/cp-test_multinode-20220325013248-262786.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 cp multinode-20220325013248-262786:/home/docker/cp-test.txt multinode-20220325013248-262786-m02:/home/docker/cp-test_multinode-20220325013248-262786_multinode-20220325013248-262786-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786-m02 "sudo cat /home/docker/cp-test_multinode-20220325013248-262786_multinode-20220325013248-262786-m02.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 cp multinode-20220325013248-262786:/home/docker/cp-test.txt multinode-20220325013248-262786-m03:/home/docker/cp-test_multinode-20220325013248-262786_multinode-20220325013248-262786-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786-m03 "sudo cat /home/docker/cp-test_multinode-20220325013248-262786_multinode-20220325013248-262786-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 cp testdata/cp-test.txt multinode-20220325013248-262786-m02:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 cp multinode-20220325013248-262786-m02:/home/docker/cp-test.txt /tmp/mk_cp_test4096231143/cp-test_multinode-20220325013248-262786-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 cp multinode-20220325013248-262786-m02:/home/docker/cp-test.txt multinode-20220325013248-262786:/home/docker/cp-test_multinode-20220325013248-262786-m02_multinode-20220325013248-262786.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786 "sudo cat /home/docker/cp-test_multinode-20220325013248-262786-m02_multinode-20220325013248-262786.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 cp multinode-20220325013248-262786-m02:/home/docker/cp-test.txt multinode-20220325013248-262786-m03:/home/docker/cp-test_multinode-20220325013248-262786-m02_multinode-20220325013248-262786-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786-m03 "sudo cat /home/docker/cp-test_multinode-20220325013248-262786-m02_multinode-20220325013248-262786-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 cp testdata/cp-test.txt multinode-20220325013248-262786-m03:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 cp multinode-20220325013248-262786-m03:/home/docker/cp-test.txt /tmp/mk_cp_test4096231143/cp-test_multinode-20220325013248-262786-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 cp multinode-20220325013248-262786-m03:/home/docker/cp-test.txt multinode-20220325013248-262786:/home/docker/cp-test_multinode-20220325013248-262786-m03_multinode-20220325013248-262786.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786 "sudo cat /home/docker/cp-test_multinode-20220325013248-262786-m03_multinode-20220325013248-262786.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 cp multinode-20220325013248-262786-m03:/home/docker/cp-test.txt multinode-20220325013248-262786-m02:/home/docker/cp-test_multinode-20220325013248-262786-m03_multinode-20220325013248-262786-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 ssh -n multinode-20220325013248-262786-m02 "sudo cat /home/docker/cp-test_multinode-20220325013248-262786-m03_multinode-20220325013248-262786-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.83s)

TestMultiNode/serial/StopNode (6.93s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 node stop m03
multinode_test.go:215: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220325013248-262786 node stop m03: (5.732026961s)
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 status
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220325013248-262786 status: exit status 7 (599.542522ms)

-- stdout --
	multinode-20220325013248-262786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220325013248-262786-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220325013248-262786-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 status --alsologtostderr
multinode_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220325013248-262786 status --alsologtostderr: exit status 7 (597.505564ms)

-- stdout --
	multinode-20220325013248-262786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220325013248-262786-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220325013248-262786-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0325 01:35:38.846724  342110 out.go:297] Setting OutFile to fd 1 ...
	I0325 01:35:38.846883  342110 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:35:38.846900  342110 out.go:310] Setting ErrFile to fd 2...
	I0325 01:35:38.846909  342110 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:35:38.847106  342110 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 01:35:38.847257  342110 out.go:304] Setting JSON to false
	I0325 01:35:38.847276  342110 mustload.go:65] Loading cluster: multinode-20220325013248-262786
	I0325 01:35:38.847580  342110 config.go:176] Loaded profile config "multinode-20220325013248-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:35:38.847601  342110 status.go:253] checking status of multinode-20220325013248-262786 ...
	I0325 01:35:38.847933  342110 cli_runner.go:133] Run: docker container inspect multinode-20220325013248-262786 --format={{.State.Status}}
	I0325 01:35:38.880119  342110 status.go:328] multinode-20220325013248-262786 host status = "Running" (err=<nil>)
	I0325 01:35:38.880156  342110 host.go:66] Checking if "multinode-20220325013248-262786" exists ...
	I0325 01:35:38.880417  342110 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220325013248-262786
	I0325 01:35:38.913497  342110 host.go:66] Checking if "multinode-20220325013248-262786" exists ...
	I0325 01:35:38.913889  342110 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 01:35:38.913959  342110 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220325013248-262786
	I0325 01:35:38.944313  342110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49374 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/multinode-20220325013248-262786/id_rsa Username:docker}
	I0325 01:35:39.027142  342110 ssh_runner.go:195] Run: systemctl --version
	I0325 01:35:39.030613  342110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 01:35:39.039438  342110 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 01:35:39.128894  342110 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 01:35:39.068283023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 01:35:39.129487  342110 kubeconfig.go:92] found "multinode-20220325013248-262786" server: "https://192.168.49.2:8443"
	I0325 01:35:39.129524  342110 api_server.go:165] Checking apiserver status ...
	I0325 01:35:39.129558  342110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0325 01:35:39.138989  342110 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1194/cgroup
	I0325 01:35:39.146396  342110 api_server.go:181] apiserver freezer: "12:freezer:/docker/4816477ba11410b98d9f2390df6a605dd4af8721892957c50062ce59932e8b8e/kubepods/burstable/podca1e3c86c7d03934032349aa09265a0a/e8209c07afb98eec929526040823d58e944f553b74e603f9521c41193aadafac"
	I0325 01:35:39.146452  342110 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4816477ba11410b98d9f2390df6a605dd4af8721892957c50062ce59932e8b8e/kubepods/burstable/podca1e3c86c7d03934032349aa09265a0a/e8209c07afb98eec929526040823d58e944f553b74e603f9521c41193aadafac/freezer.state
	I0325 01:35:39.152636  342110 api_server.go:203] freezer state: "THAWED"
	I0325 01:35:39.152681  342110 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0325 01:35:39.157556  342110 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0325 01:35:39.157578  342110 status.go:419] multinode-20220325013248-262786 apiserver status = Running (err=<nil>)
	I0325 01:35:39.157588  342110 status.go:255] multinode-20220325013248-262786 status: &{Name:multinode-20220325013248-262786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0325 01:35:39.157602  342110 status.go:253] checking status of multinode-20220325013248-262786-m02 ...
	I0325 01:35:39.157828  342110 cli_runner.go:133] Run: docker container inspect multinode-20220325013248-262786-m02 --format={{.State.Status}}
	I0325 01:35:39.190944  342110 status.go:328] multinode-20220325013248-262786-m02 host status = "Running" (err=<nil>)
	I0325 01:35:39.190980  342110 host.go:66] Checking if "multinode-20220325013248-262786-m02" exists ...
	I0325 01:35:39.191320  342110 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220325013248-262786-m02
	I0325 01:35:39.221553  342110 host.go:66] Checking if "multinode-20220325013248-262786-m02" exists ...
	I0325 01:35:39.221855  342110 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0325 01:35:39.221899  342110 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220325013248-262786-m02
	I0325 01:35:39.251981  342110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/multinode-20220325013248-262786-m02/id_rsa Username:docker}
	I0325 01:35:39.335237  342110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0325 01:35:39.344441  342110 status.go:255] multinode-20220325013248-262786-m02 status: &{Name:multinode-20220325013248-262786-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0325 01:35:39.344481  342110 status.go:253] checking status of multinode-20220325013248-262786-m03 ...
	I0325 01:35:39.344740  342110 cli_runner.go:133] Run: docker container inspect multinode-20220325013248-262786-m03 --format={{.State.Status}}
	I0325 01:35:39.380693  342110 status.go:328] multinode-20220325013248-262786-m03 host status = "Stopped" (err=<nil>)
	I0325 01:35:39.380718  342110 status.go:341] host is not running, skipping remaining checks
	I0325 01:35:39.380724  342110 status.go:255] multinode-20220325013248-262786-m03 status: &{Name:multinode-20220325013248-262786-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (6.93s)

TestMultiNode/serial/StartAfterStop (36.11s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:249: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 node start m03 --alsologtostderr
E0325 01:36:00.416246  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 01:36:05.105746  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
multinode_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220325013248-262786 node start m03 --alsologtostderr: (35.275996793s)
multinode_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 status
multinode_test.go:280: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.11s)

TestMultiNode/serial/RestartKeepsNodes (190.91s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220325013248-262786
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220325013248-262786
E0325 01:36:31.636149  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 01:36:32.792446  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220325013248-262786: (45.787012572s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220325013248-262786 --wait=true -v=8 --alsologtostderr
E0325 01:38:47.791088  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
E0325 01:39:15.476950  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
multinode_test.go:300: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220325013248-262786 --wait=true -v=8 --alsologtostderr: (2m24.999054041s)
multinode_test.go:305: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220325013248-262786
--- PASS: TestMultiNode/serial/RestartKeepsNodes (190.91s)

TestMultiNode/serial/DeleteNode (9.82s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:399: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 node delete m03
multinode_test.go:399: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220325013248-262786 node delete m03: (9.012600032s)
multinode_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 status --alsologtostderr
multinode_test.go:419: (dbg) Run:  docker volume ls
multinode_test.go:429: (dbg) Run:  kubectl get nodes
multinode_test.go:437: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (9.82s)

TestMultiNode/serial/StopMultiNode (40.35s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:319: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 stop
multinode_test.go:319: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220325013248-262786 stop: (40.100802324s)
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 status
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220325013248-262786 status: exit status 7 (125.563497ms)

-- stdout --
	multinode-20220325013248-262786
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220325013248-262786-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 status --alsologtostderr
multinode_test.go:332: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220325013248-262786 status --alsologtostderr: exit status 7 (121.072026ms)

-- stdout --
	multinode-20220325013248-262786
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220325013248-262786-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0325 01:40:16.506315  352558 out.go:297] Setting OutFile to fd 1 ...
	I0325 01:40:16.506413  352558 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:40:16.506417  352558 out.go:310] Setting ErrFile to fd 2...
	I0325 01:40:16.506421  352558 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:40:16.506515  352558 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 01:40:16.506665  352558 out.go:304] Setting JSON to false
	I0325 01:40:16.506684  352558 mustload.go:65] Loading cluster: multinode-20220325013248-262786
	I0325 01:40:16.507031  352558 config.go:176] Loaded profile config "multinode-20220325013248-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:40:16.507054  352558 status.go:253] checking status of multinode-20220325013248-262786 ...
	I0325 01:40:16.507457  352558 cli_runner.go:133] Run: docker container inspect multinode-20220325013248-262786 --format={{.State.Status}}
	I0325 01:40:16.537814  352558 status.go:328] multinode-20220325013248-262786 host status = "Stopped" (err=<nil>)
	I0325 01:40:16.537834  352558 status.go:341] host is not running, skipping remaining checks
	I0325 01:40:16.537839  352558 status.go:255] multinode-20220325013248-262786 status: &{Name:multinode-20220325013248-262786 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0325 01:40:16.537867  352558 status.go:253] checking status of multinode-20220325013248-262786-m02 ...
	I0325 01:40:16.538088  352558 cli_runner.go:133] Run: docker container inspect multinode-20220325013248-262786-m02 --format={{.State.Status}}
	I0325 01:40:16.568124  352558 status.go:328] multinode-20220325013248-262786-m02 host status = "Stopped" (err=<nil>)
	I0325 01:40:16.568145  352558 status.go:341] host is not running, skipping remaining checks
	I0325 01:40:16.568150  352558 status.go:255] multinode-20220325013248-262786-m02 status: &{Name:multinode-20220325013248-262786-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.35s)

TestMultiNode/serial/RestartMultiNode (117.54s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:349: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:359: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220325013248-262786 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0325 01:41:00.415778  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 01:41:05.105922  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
multinode_test.go:359: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220325013248-262786 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m56.840578178s)
multinode_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220325013248-262786 status --alsologtostderr
multinode_test.go:379: (dbg) Run:  kubectl get nodes
multinode_test.go:387: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (117.54s)

TestMultiNode/serial/ValidateNameConflict (44.67s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220325013248-262786
multinode_test.go:457: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220325013248-262786-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:457: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220325013248-262786-m02 --driver=docker  --container-runtime=containerd: exit status 14 (74.836123ms)

-- stdout --
	* [multinode-20220325013248-262786-m02] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220325013248-262786-m02' is duplicated with machine name 'multinode-20220325013248-262786-m02' in profile 'multinode-20220325013248-262786'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220325013248-262786-m03 --driver=docker  --container-runtime=containerd
E0325 01:42:23.462469  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
multinode_test.go:465: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220325013248-262786-m03 --driver=docker  --container-runtime=containerd: (41.518367673s)
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220325013248-262786
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220325013248-262786: exit status 80 (336.405687ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220325013248-262786
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220325013248-262786-m03 already exists in multinode-20220325013248-262786-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:477: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220325013248-262786-m03
multinode_test.go:477: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220325013248-262786-m03: (2.67784477s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.67s)

TestPreload (114.24s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220325014303-262786 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0
E0325 01:43:47.790654  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220325014303-262786 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0: (1m4.923093134s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220325014303-262786 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220325014303-262786 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.066693614s)
preload_test.go:72: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220325014303-262786 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3
preload_test.go:72: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220325014303-262786 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3: (45.248616194s)
preload_test.go:81: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220325014303-262786 -- sudo crictl image ls
helpers_test.go:176: Cleaning up "test-preload-20220325014303-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220325014303-262786
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220325014303-262786: (2.648521994s)
--- PASS: TestPreload (114.24s)

TestScheduledStopUnix (118.55s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220325014457-262786 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:129: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220325014457-262786 --memory=2048 --driver=docker  --container-runtime=containerd: (41.669054384s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220325014457-262786 --schedule 5m
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220325014457-262786 -n scheduled-stop-20220325014457-262786
scheduled_stop_test.go:170: signal error was:  <nil>
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220325014457-262786 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220325014457-262786 --cancel-scheduled
E0325 01:46:00.416877  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220325014457-262786 -n scheduled-stop-20220325014457-262786
E0325 01:46:05.105533  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220325014457-262786
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220325014457-262786 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220325014457-262786
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220325014457-262786: exit status 7 (93.221579ms)

-- stdout --
	scheduled-stop-20220325014457-262786
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220325014457-262786 -n scheduled-stop-20220325014457-262786
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220325014457-262786 -n scheduled-stop-20220325014457-262786: exit status 7 (91.313421ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20220325014457-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220325014457-262786
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220325014457-262786: (5.21090838s)
--- PASS: TestScheduledStopUnix (118.55s)
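The scheduled-stop flags exercised above (--schedule, --cancel-scheduled) compose as follows. A hedged sketch with a hypothetical profile name; the polling interval and timeout are arbitrary choices, not part of the test:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// mk runs the minikube binary and returns trimmed combined output.
func mk(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	p := "scheduled-stop-example" // hypothetical profile name

	mk("stop", "-p", p, "--schedule", "5m")   // arm a stop five minutes out
	mk("stop", "-p", p, "--cancel-scheduled") // disarm it again
	mk("stop", "-p", p, "--schedule", "15s")  // arm a short one

	// Poll until the scheduled stop fires; once the host is down,
	// `minikube status` exits non-zero (exit status 7 in the log above).
	for i := 0; i < 30; i++ {
		host, err := mk("status", "--format={{.Host}}", "-p", p)
		if err != nil && host == "Stopped" {
			fmt.Println("host stopped as scheduled")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for the scheduled stop")
}
```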

TestInsufficientStorage (18.63s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220325014656-262786 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:51: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220325014656-262786 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (11.823766184s)

-- stdout --
	{"specversion":"1.0","id":"ba0dd901-e6b9-45c9-bba6-c1373be18070","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220325014656-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7598869a-8600-4567-9c28-9a7814de0a83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13812"}}
	{"specversion":"1.0","id":"b084d548-9956-428a-9f4d-195ba862721f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"27a471f0-3ee5-4843-bbf3-7a8490e027c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig"}}
	{"specversion":"1.0","id":"f6a51a0e-b73c-4b3e-aaa1-3e65059e2d3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube"}}
	{"specversion":"1.0","id":"04357c15-eef5-4d86-b366-80a52f27b729","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f5b0e999-a0af-445f-93b9-aaa128c5df5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5b8fc719-4250-40a2-acf7-12aeac7e83ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"28a380f2-5724-442e-9612-b329d3652088","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d6191d6d-2800-488e-ac69-3d0c7acff619","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Your cgroup does not allow setting memory."}}
	{"specversion":"1.0","id":"06a70223-c332-4284-89b6-67ce04890752","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"}}
	{"specversion":"1.0","id":"7b1abd3b-1848-48fa-96b8-03b30afb655e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220325014656-262786 in cluster insufficient-storage-20220325014656-262786","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"41ff96bd-9a94-4f7e-a957-c6bf4899b126","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e660f43-0b4e-4203-88fc-e10f5e704391","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ce2641c8-c904-4a8f-bc70-f7781d95fe4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220325014656-262786 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220325014656-262786 --output=json --layout=cluster: exit status 7 (339.999266ms)

-- stdout --
	{"Name":"insufficient-storage-20220325014656-262786","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220325014656-262786","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0325 01:47:08.278142  373005 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220325014656-262786" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig

** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220325014656-262786 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220325014656-262786 --output=json --layout=cluster: exit status 7 (342.517569ms)

-- stdout --
	{"Name":"insufficient-storage-20220325014656-262786","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220325014656-262786","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0325 01:47:08.621098  373104 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220325014656-262786" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	E0325 01:47:08.629345  373104 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/insufficient-storage-20220325014656-262786/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20220325014656-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220325014656-262786
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220325014656-262786: (6.127205431s)
--- PASS: TestInsufficientStorage (18.63s)
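Each --output=json line above is a CloudEvents-style record; the error event carries the failure name, message, and exit code. A minimal consumer sketch, with field names taken from the output above (the buffer size is an assumption to accommodate long lines):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models only the fields of interest in a minikube JSON output line.
type event struct {
	Type string `json:"type"`
	Data struct {
		Name     string `json:"name"`
		Message  string `json:"message"`
		ExitCode string `json:"exitcode"`
	} `json:"data"`
}

func main() {
	// Pipe `minikube start --output=json ...` into stdin.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip anything that is not a JSON event
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
		}
	}
}
```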

TestRunningBinaryUpgrade (301.02s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.1705474429.exe start -p running-upgrade-20220325014921-262786 --memory=2200 --vm-driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.1705474429.exe start -p running-upgrade-20220325014921-262786 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (4m16.049362062s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220325014921-262786 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0325 01:53:47.791193  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220325014921-262786 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (33.942957457s)
helpers_test.go:176: Cleaning up "running-upgrade-20220325014921-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220325014921-262786
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220325014921-262786: (10.608388542s)
--- PASS: TestRunningBinaryUpgrade (301.02s)

TestKubernetesUpgrade (186.3s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220325015003-262786 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0325 01:50:10.837833  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220325015003-262786 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (56.614529263s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220325015003-262786
E0325 01:51:00.415586  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 01:51:05.105449  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220325015003-262786: (6.217951633s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220325015003-262786 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220325015003-262786 status --format={{.Host}}: exit status 7 (116.354782ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220325015003-262786 --memory=2200 --kubernetes-version=v1.23.4-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220325015003-262786 --memory=2200 --kubernetes-version=v1.23.4-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m44.360369059s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220325015003-262786 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220325015003-262786 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220325015003-262786 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (90.278261ms)

-- stdout --
	* [kubernetes-upgrade-20220325015003-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.4-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220325015003-262786
	    minikube start -p kubernetes-upgrade-20220325015003-262786 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220325015003-2627862 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.4-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220325015003-262786 --kubernetes-version=v1.23.4-rc.0
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220325015003-262786 --memory=2200 --kubernetes-version=v1.23.4-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220325015003-262786 --memory=2200 --kubernetes-version=v1.23.4-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (15.224492161s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20220325015003-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220325015003-262786

=== CONT  TestKubernetesUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220325015003-262786: (3.616024178s)
--- PASS: TestKubernetesUpgrade (186.30s)
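The downgrade refusal above is surfaced purely through the exit code. A sketch of detecting it, assuming exit code 106 (K8S_DOWNGRADE_UNSUPPORTED, as logged) and a hypothetical profile that is already running a newer Kubernetes:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Attempt to move an existing profile to an older Kubernetes version.
	cmd := exec.Command("minikube", "start", "-p", "upgrade-example",
		"--kubernetes-version=v1.16.0", "--driver=docker",
		"--container-runtime=containerd")
	err := cmd.Run()

	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 106 {
		fmt.Println("downgrade refused as expected (K8S_DOWNGRADE_UNSUPPORTED)")
		return
	}
	fmt.Println("unexpected result:", err)
}
```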

TestMissingContainerUpgrade (111.3s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.3618167030.exe start -p missing-upgrade-20220325014930-262786 --memory=2200 --driver=docker  --container-runtime=containerd

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.3618167030.exe start -p missing-upgrade-20220325014930-262786 --memory=2200 --driver=docker  --container-runtime=containerd: (52.691499711s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220325014930-262786
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220325014930-262786: (10.323808597s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220325014930-262786
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220325014930-262786 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220325014930-262786 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (45.163109687s)
helpers_test.go:176: Cleaning up "missing-upgrade-20220325014930-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220325014930-262786
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220325014930-262786: (2.77615328s)
--- PASS: TestMissingContainerUpgrade (111.30s)

TestStoppedBinaryUpgrade/Setup (0.5s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.50s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220325014714-262786 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220325014714-262786 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (98.333743ms)

-- stdout --
	* [NoKubernetes-20220325014714-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (67.31s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220325014714-262786 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220325014714-262786 --driver=docker  --container-runtime=containerd: (1m6.922675389s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220325014714-262786 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (67.31s)

TestStoppedBinaryUpgrade/Upgrade (115.73s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.1265663422.exe start -p stopped-upgrade-20220325014714-262786 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0325 01:47:28.153671  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.1265663422.exe start -p stopped-upgrade-20220325014714-262786 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (56.106496924s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.1265663422.exe -p stopped-upgrade-20220325014714-262786 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.1265663422.exe -p stopped-upgrade-20220325014714-262786 stop: (1.447531924s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220325014714-262786 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220325014714-262786 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (58.172329188s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (115.73s)

TestNoKubernetes/serial/StartWithStopK8s (24.77s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220325014714-262786 --no-kubernetes --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220325014714-262786 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.002471817s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220325014714-262786 status -o json
no_kubernetes_test.go:201: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220325014714-262786 status -o json: exit status 2 (380.736157ms)

-- stdout --
	{"Name":"NoKubernetes-20220325014714-262786","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:125: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220325014714-262786
no_kubernetes_test.go:125: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220325014714-262786: (10.386768748s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (24.77s)
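The status -o json document above is flat, which makes the "host running, Kubernetes stopped" check straightforward. A sketch whose struct fields mirror the keys in that output; the profile name is hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus mirrors the flat JSON printed by `minikube status -o json`.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// `minikube status` exits non-zero (2 above) when components are
	// stopped, so inspect the JSON rather than the error.
	out, _ := exec.Command("minikube", "-p", "nokubernetes-example",
		"status", "-o", "json").Output()

	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	ok := st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped"
	fmt.Println("host up with Kubernetes stopped:", ok)
}
```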

TestNoKubernetes/serial/Start (4.77s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220325014714-262786 --no-kubernetes --driver=docker  --container-runtime=containerd
E0325 01:48:47.791393  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220325014714-262786 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.771648677s)
--- PASS: TestNoKubernetes/serial/Start (4.77s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.46s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220325014714-262786 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220325014714-262786 "sudo systemctl is-active --quiet service kubelet": exit status 1 (458.444286ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.46s)
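About the exit codes above: `systemctl is-active` exits non-zero (3, per the ssh message) when the unit is inactive, and `minikube ssh` propagates the remote failure as its own non-zero exit. A sketch of the same check, with a hypothetical profile name:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The remote command is the one from the log; a non-nil error means
	// kubelet is NOT active, which is what a --no-kubernetes profile expects.
	err := exec.Command("minikube", "ssh", "-p", "nokubernetes-example",
		"sudo systemctl is-active --quiet service kubelet").Run()
	fmt.Println("kubelet inactive:", err != nil)
}
```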

TestNoKubernetes/serial/ProfileList (1.42s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:170: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.42s)

TestNoKubernetes/serial/Stop (6.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220325014714-262786
no_kubernetes_test.go:159: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220325014714-262786: (6.239007136s)
--- PASS: TestNoKubernetes/serial/Stop (6.24s)

TestNoKubernetes/serial/StartNoArgs (5.65s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:192: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220325014714-262786 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:192: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220325014714-262786 --driver=docker  --container-runtime=containerd: (5.652710496s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (5.65s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220325014714-262786 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220325014714-262786 "sudo systemctl is-active --quiet service kubelet": exit status 1 (370.176199ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220325014714-262786
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

TestNetworkPlugins/group/false (0.73s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:214: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220325014920-262786 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20220325014920-262786 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (262.431969ms)

-- stdout --
	* [false-20220325014920-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0325 01:49:20.733626  398702 out.go:297] Setting OutFile to fd 1 ...
	I0325 01:49:20.733755  398702 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:49:20.733767  398702 out.go:310] Setting ErrFile to fd 2...
	I0325 01:49:20.733776  398702 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0325 01:49:20.733941  398702 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0325 01:49:20.734235  398702 out.go:304] Setting JSON to false
	I0325 01:49:20.735636  398702 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":16033,"bootTime":1648156928,"procs":513,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0325 01:49:20.735711  398702 start.go:125] virtualization: kvm guest
	I0325 01:49:20.738606  398702 out.go:176] * [false-20220325014920-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0325 01:49:20.740374  398702 out.go:176]   - MINIKUBE_LOCATION=13812
	I0325 01:49:20.738855  398702 notify.go:193] Checking for updates...
	I0325 01:49:20.742069  398702 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0325 01:49:20.743913  398702 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0325 01:49:20.745433  398702 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0325 01:49:20.746990  398702 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0325 01:49:20.747745  398702 config.go:176] Loaded profile config "cert-expiration-20220325014851-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:49:20.748344  398702 config.go:176] Loaded profile config "cert-options-20220325014907-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:49:20.748543  398702 config.go:176] Loaded profile config "force-systemd-flag-20220325014827-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
	I0325 01:49:20.748607  398702 driver.go:346] Setting default libvirt URI to qemu:///system
	I0325 01:49:20.809507  398702 docker.go:136] docker version: linux-20.10.14
	I0325 01:49:20.809656  398702 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0325 01:49:20.918034  398702 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-25 01:49:20.845270497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0325 01:49:20.918198  398702 docker.go:253] overlay module found
	I0325 01:49:20.921225  398702 out.go:176] * Using the docker driver based on user configuration
	I0325 01:49:20.921266  398702 start.go:284] selected driver: docker
	I0325 01:49:20.921274  398702 start.go:801] validating driver "docker" against <nil>
	I0325 01:49:20.921305  398702 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0325 01:49:20.921359  398702 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0325 01:49:20.921387  398702 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0325 01:49:20.923093  398702 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0325 01:49:20.925491  398702 out.go:176] 
	W0325 01:49:20.925619  398702 out.go:241] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0325 01:49:20.927247  398702 out.go:176] 

** /stderr **
helpers_test.go:176: Cleaning up "false-20220325014920-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20220325014920-262786
--- PASS: TestNetworkPlugins/group/false (0.73s)

TestPause/serial/Start (70.67s)

=== RUN   TestPause/serial/Start
pause_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220325015121-262786 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220325015121-262786 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m10.66683946s)
--- PASS: TestPause/serial/Start (70.67s)

TestPause/serial/SecondStartNoReconfiguration (15.66s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220325015121-262786 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220325015121-262786 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (15.643151354s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (15.66s)

TestPause/serial/Pause (0.73s)

=== RUN   TestPause/serial/Pause
pause_test.go:111: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220325015121-262786 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

TestPause/serial/VerifyStatus (0.39s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220325015121-262786 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220325015121-262786 --output=json --layout=cluster: exit status 2 (389.107005ms)

-- stdout --
	{"Name":"pause-20220325015121-262786","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220325015121-262786","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
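The --layout=cluster document above encodes component state as HTTP-like status codes: 200 OK, 405 Stopped, 418 Paused (and 507 InsufficientStorage earlier in this report). A minimal decode sketch using a trimmed copy of that output:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus mirrors the cluster-layout status document, where
// StatusCode borrows HTTP semantics (418 is reused to mean "paused").
type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		StatusCode int
		StatusName string
	}
}

func main() {
	// Trimmed from the `--output=json --layout=cluster` line above.
	raw := []byte(`{"Name":"pause-20220325015121-262786","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-20220325015121-262786","StatusCode":200,"StatusName":"OK"}]}`)

	var st clusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("cluster %s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
	for _, n := range st.Nodes {
		fmt.Printf("  node %s: %d %s\n", n.Name, n.StatusCode, n.StatusName)
	}
}
```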

TestPause/serial/Unpause (0.7s)

=== RUN   TestPause/serial/Unpause
pause_test.go:122: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220325015121-262786 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

TestPause/serial/PauseAgain (5.42s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:111: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220325015121-262786 --alsologtostderr -v=5

=== CONT  TestPause/serial/PauseAgain
pause_test.go:111: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20220325015121-262786 --alsologtostderr -v=5: (5.422457826s)
--- PASS: TestPause/serial/PauseAgain (5.42s)

TestNetworkPlugins/group/auto/Start (59.87s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220325014919-262786 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220325014919-262786 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (59.866359561s)
--- PASS: TestNetworkPlugins/group/auto/Start (59.87s)

TestPause/serial/DeletePaused (9.84s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220325015121-262786 --alsologtostderr -v=5
pause_test.go:133: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220325015121-262786 --alsologtostderr -v=5: (9.839137912s)
--- PASS: TestPause/serial/DeletePaused (9.84s)

TestPause/serial/VerifyDeletedResources (0.89s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json

=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:169: (dbg) Run:  docker ps -a
pause_test.go:174: (dbg) Run:  docker volume inspect pause-20220325015121-262786
pause_test.go:174: (dbg) Non-zero exit: docker volume inspect pause-20220325015121-262786: exit status 1 (37.068387ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220325015121-262786

** /stderr **
pause_test.go:179: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.89s)
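The deletion check above inverts the usual convention: `docker volume inspect` failing with "No such volume" is the passing outcome. A sketch of the same check, with the profile volume name taken from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "pause-20220325015121-262786" // the deleted profile's volume
	err := exec.Command("docker", "volume", "inspect", name).Run()
	// A non-nil error ("No such volume") is the expected, passing outcome.
	fmt.Println("volume removed:", err != nil)
}
```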

TestNetworkPlugins/group/cilium/Start (90.99s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220325014921-262786 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220325014921-262786 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m30.986287244s)
--- PASS: TestNetworkPlugins/group/cilium/Start (90.99s)

TestNetworkPlugins/group/auto/KubeletFlags (0.71s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220325014919-262786 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.71s)

TestNetworkPlugins/group/auto/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context auto-20220325014919-262786 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context auto-20220325014919-262786 replace --force -f testdata/netcat-deployment.yaml: (1.017863135s)
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-cjp67" [b38ceb83-3054-4333-a37b-b6c706cda72b] Pending
helpers_test.go:343: "netcat-668db85669-cjp67" [b38ceb83-3054-4333-a37b-b6c706cda72b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-cjp67" [b38ceb83-3054-4333-a37b-b6c706cda72b] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.17739397s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220325014919-262786 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:182: (dbg) Run:  kubectl --context auto-20220325014919-262786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:232: (dbg) Run:  kubectl --context auto-20220325014919-262786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
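The three probes above check, in order: in-cluster DNS resolution, pod-local reachability, and hairpin traffic (a pod reaching itself through its own service name). A sketch shelling out the same kubectl commands against a hypothetical kubeconfig context:

```go
package main

import (
	"fmt"
	"os/exec"
)

// probe runs a shell command inside the netcat deployment and reports
// whether it exited cleanly.
func probe(ctx, cmd string) bool {
	return exec.Command("kubectl", "--context", ctx, "exec",
		"deployment/netcat", "--", "/bin/sh", "-c", cmd).Run() == nil
}

func main() {
	ctx := "auto-example" // hypothetical kubeconfig context
	fmt.Println("dns:", probe(ctx, "nslookup kubernetes.default"))
	fmt.Println("localhost:", probe(ctx, "nc -w 5 -i 5 -z localhost 8080"))
	fmt.Println("hairpin:", probe(ctx, "nc -w 5 -i 5 -z netcat 8080"))
}
```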

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-twds2" [4b51de52-6d88-48c4-93cb-9b2326c32b0d] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.015115199s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220325014921-262786 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.36s)

TestNetworkPlugins/group/cilium/NetCatPod (11s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context cilium-20220325014921-262786 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-w4r6v" [0cdb5ea5-fcdd-4f56-b55b-261795223bf0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-w4r6v" [0cdb5ea5-fcdd-4f56-b55b-261795223bf0] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.006337341s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (11.00s)

TestNetworkPlugins/group/cilium/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20220325014921-262786 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.13s)

TestNetworkPlugins/group/cilium/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:182: (dbg) Run:  kubectl --context cilium-20220325014921-262786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.14s)

TestNetworkPlugins/group/cilium/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:232: (dbg) Run:  kubectl --context cilium-20220325014921-262786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/Start (71.38s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220325014920-262786 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd
E0325 01:56:00.415948  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 01:56:05.105080  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20220325014920-262786 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m11.37529606s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.38s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-sqq6l" [f4681712-732f-4c97-a171-96743c9634a6] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.013106132s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20220325014920-262786 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kindnet-20220325014920-262786 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-97zd7" [b6d63a62-abe5-46b3-aba3-e0716cce6c08] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-97zd7" [b6d63a62-abe5-46b3-aba3-e0716cce6c08] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.006425427s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.22s)

TestNetworkPlugins/group/enable-default-cni/Start (59.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220325014920-262786 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0325 02:01:39.938740  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
E0325 02:02:24.139814  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220325014920-262786 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (59.52345563s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.52s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220325014920-262786 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context enable-default-cni-20220325014920-262786 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-zhwjg" [035cfbeb-98ae-4eed-a01d-e7bdcdc88dcd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-zhwjg" [035cfbeb-98ae-4eed-a01d-e7bdcdc88dcd] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.006827323s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

TestNetworkPlugins/group/bridge/Start (57.55s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220325014920-262786 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220325014920-262786 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (57.552629262s)
--- PASS: TestNetworkPlugins/group/bridge/Start (57.55s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220325014920-262786 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

TestNetworkPlugins/group/bridge/NetCatPod (9.33s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context bridge-20220325014920-262786 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-9dnp4" [d7455150-9164-4871-80fe-b52e08dda5c7] Pending
helpers_test.go:343: "netcat-668db85669-9dnp4" [d7455150-9164-4871-80fe-b52e08dda5c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-9dnp4" [d7455150-9164-4871-80fe-b52e08dda5c7] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.007134562s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.33s)

TestStartStop/group/embed-certs/serial/FirstStart (59.23s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220325020743-262786 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220325020743-262786 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3: (59.229208997s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (59.23s)

TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220325020743-262786 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [bdf5b7d1-9e91-49a1-9984-5b92b8fcb3de] Pending
helpers_test.go:343: "busybox" [bdf5b7d1-9e91-49a1-9984-5b92b8fcb3de] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [bdf5b7d1-9e91-49a1-9984-5b92b8fcb3de] Running
E0325 02:08:47.790988  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220325012643-262786/client.crt: no such file or directory
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.011977766s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220325020743-262786 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.27s)
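
DeployApp schedules a plain busybox pod and reads back `ulimit -n` to confirm the container inherits the expected open-file limit from the runtime. By hand, with the harness manifest and the same profile:

	kubectl --context embed-certs-20220325020743-262786 create -f testdata/busybox.yaml
	kubectl --context embed-certs-20220325020743-262786 wait pod busybox --for=condition=Ready --timeout=8m
	# Print the soft limit on open file descriptors inside the container.
	kubectl --context embed-certs-20220325020743-262786 exec busybox -- /bin/sh -c "ulimit -n"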

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.7s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220325020743-262786 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20220325020743-262786 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.70s)

TestStartStop/group/embed-certs/serial/Stop (20.2s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220325020743-262786 --alsologtostderr -v=3
E0325 02:08:55.877676  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
E0325 02:08:56.094008  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220325014919-262786/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220325020743-262786 --alsologtostderr -v=3: (20.201991336s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.20s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220325020743-262786 -n embed-certs-20220325020743-262786
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220325020743-262786 -n embed-certs-20220325020743-262786: exit status 7 (102.486811ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220325020743-262786 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
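
`minikube status` signals state through its exit code, which is why the harness notes "may be ok": in this run, exit status 7 corresponds to Host=Stopped, and the step only needs the profile to exist so the dashboard addon can be staged for the next start. Sketch of the same exit-code handling:

	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220325020743-262786
	echo $?   # 7 here: host stopped, not an error for this step
	out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220325020743-262786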

TestStartStop/group/embed-certs/serial/SecondStart (324.74s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220325020743-262786 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220325020743-262786 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3: (5m24.273657668s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220325020743-262786 -n embed-certs-20220325020743-262786
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (324.74s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220325015306-262786 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20220325015306-262786 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.57s)

TestStartStop/group/old-k8s-version/serial/Stop (5.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220325015306-262786 --alsologtostderr -v=3
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220325015306-262786 --alsologtostderr -v=3: (5.912309054s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.91s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220325015306-262786 -n old-k8s-version-20220325015306-262786
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220325015306-262786 -n old-k8s-version-20220325015306-262786: exit status 7 (117.398851ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220325015306-262786 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-czq6n" [76b16d51-b731-4ee5-b833-b0e9834660c6] Running
E0325 02:14:38.635101  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220325014920-262786/client.crt: no such file or directory
E0325 02:14:40.295099  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013685178s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-czq6n" [76b16d51-b731-4ee5-b833-b0e9834660c6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00664823s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20220325020743-262786 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20220325020743-262786 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)
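
VerifyKubernetesImages shells into the node and dumps the containerd image store with crictl, then flags anything outside the stock minikube set; the two hits above (the kindnet CNI image and the busybox test image) are expected for this profile. By hand:

	# JSON dump, as the test consumes it:
	out/minikube-linux-amd64 ssh -p embed-certs-20220325020743-262786 "sudo crictl images -o json"
	# Human-readable listing:
	out/minikube-linux-amd64 ssh -p embed-certs-20220325020743-262786 "sudo crictl images"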

TestStartStop/group/embed-certs/serial/Pause (3.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20220325020743-262786 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220325020743-262786 -n embed-certs-20220325020743-262786
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220325020743-262786 -n embed-certs-20220325020743-262786: exit status 2 (393.076748ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220325020743-262786 -n embed-certs-20220325020743-262786
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220325020743-262786 -n embed-certs-20220325020743-262786: exit status 2 (396.457667ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20220325020743-262786 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220325020743-262786 -n embed-certs-20220325020743-262786
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220325020743-262786 -n embed-certs-20220325020743-262786
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.14s)
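
Pause freezes the control-plane containers without deleting anything, so status immediately reports APIServer=Paused and Kubelet=Stopped — each via exit status 2, again "may be ok" — and unpause restores both. The round trip by hand:

	out/minikube-linux-amd64 pause -p embed-certs-20220325020743-262786
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220325020743-262786   # Paused, exit 2
	out/minikube-linux-amd64 unpause -p embed-certs-20220325020743-262786
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220325020743-262786   # expected: running again, exit 0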

TestStartStop/group/newest-cni/serial/FirstStart (54.92s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220325021454-262786 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.4-rc.0
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220325021454-262786 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.4-rc.0: (54.924511071s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (54.92s)
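
The newest-cni profile exercises flag plumbing as much as networking: each `--extra-config` takes a component.key=value pair that is handed verbatim to that component (kubelet, kubeadm), and `--feature-gates` is forwarded to the control plane. The same invocation as above, reformatted for readability:

	out/minikube-linux-amd64 start -p newest-cni-20220325021454-262786 \
	  --memory=2200 --wait=apiserver,system_pods,default_sa \
	  --feature-gates ServerSideApply=true \
	  --network-plugin=cni \
	  --extra-config=kubelet.network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.23.4-rc.0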

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.65s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220325021454-262786 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.65s)

TestStartStop/group/newest-cni/serial/Stop (20.18s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220325021454-262786 --alsologtostderr -v=3
E0325 02:16:00.416232  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220325011821-262786/client.crt: no such file or directory
E0325 02:16:03.340586  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220325014921-262786/client.crt: no such file or directory
E0325 02:16:05.105096  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220325012358-262786/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220325021454-262786 --alsologtostderr -v=3: (20.176202601s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.18s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220325021454-262786 -n newest-cni-20220325021454-262786
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220325021454-262786 -n newest-cni-20220325021454-262786: exit status 7 (103.393786ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220325021454-262786 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (34.72s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220325021454-262786 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.4-rc.0
E0325 02:16:12.031952  262786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: no such file or directory
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220325021454-262786 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.4-rc.0: (34.315391025s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220325021454-262786 -n newest-cni-20220325021454-262786
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.72s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.62s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220325020326-262786 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20220325020326-262786 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.62s)

TestStartStop/group/no-preload/serial/Stop (10.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220325020326-262786 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220325020326-262786 --alsologtostderr -v=3: (10.057597649s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220325020326-262786 -n no-preload-20220325020326-262786
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220325020326-262786 -n no-preload-20220325020326-262786: exit status 7 (103.976862ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220325020326-262786 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220325021454-262786 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/newest-cni/serial/Pause (2.93s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220325021454-262786 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220325021454-262786 -n newest-cni-20220325021454-262786
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220325021454-262786 -n newest-cni-20220325021454-262786: exit status 2 (384.254278ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220325021454-262786 -n newest-cni-20220325021454-262786
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220325021454-262786 -n newest-cni-20220325021454-262786: exit status 2 (398.890318ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220325021454-262786 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220325021454-262786 -n newest-cni-20220325021454-262786
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220325021454-262786 -n newest-cni-20220325021454-262786
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.93s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.62s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220325020956-262786 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20220325020956-262786 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.62s)

TestStartStop/group/default-k8s-different-port/serial/Stop (9.48s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220325020956-262786 --alsologtostderr -v=3
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220325020956-262786 --alsologtostderr -v=3: (9.481068931s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (9.48s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220325020956-262786 -n default-k8s-different-port-20220325020956-262786
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220325020956-262786 -n default-k8s-different-port-20220325020956-262786: exit status 7 (100.843481ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220325020956-262786 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.21s)

Test skip (25/267)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.23.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.3/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.3/cached-images (0.00s)

TestDownloadOnly/v1.23.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.3/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.3/binaries (0.00s)

TestDownloadOnly/v1.23.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.3/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.3/kubectl (0.00s)

TestDownloadOnly/v1.23.4-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.4-rc.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.4-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.23.4-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.4-rc.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.4-rc.0/binaries (0.00s)

TestDownloadOnly/v1.23.4-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.4-rc.0/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.4-rc.0/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:449: Skipping Olm addon till images are fixed
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:36: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:187: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:457: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:547: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (0.43s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:89: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:176: Cleaning up "kubenet-20220325014919-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20220325014919-262786
--- SKIP: TestNetworkPlugins/group/kubenet (0.43s)

TestNetworkPlugins/group/flannel (0.45s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20220325014920-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220325014920-262786
--- SKIP: TestNetworkPlugins/group/flannel (0.45s)

TestStartStop/group/disable-driver-mounts (0.48s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20220325020956-262786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220325020956-262786
--- SKIP: TestStartStop/group/disable-driver-mounts (0.48s)